Context Although stroke centers are widely accepted and supported, little is known about their effect on patient outcomes.
Objective To examine the association between admission to stroke centers for acute ischemic stroke and mortality.
Design, Setting, and Participants Observational study using data from the New York Statewide Planning and Research Cooperative System. We compared mortality for patients admitted with acute ischemic stroke (n = 30 947) between 2005 and 2006 at designated stroke centers and nondesignated hospitals using differential distance to hospitals as an instrumental variable to adjust for potential prehospital selection bias. Patients were followed up for mortality for 1 year after the index hospitalization through 2007. To assess whether our findings were specific to stroke, we also compared mortality for patients admitted with gastrointestinal hemorrhage (n = 39 409) or acute myocardial infarction (n = 40 024) at designated stroke centers and nondesignated hospitals.
Main Outcome Measure Thirty-day all-cause mortality.
Results Among 30 947 patients with acute ischemic stroke, 15 297 (49.4%) were admitted to designated stroke centers. Using the instrumental variable analysis, admission to designated stroke centers was associated with lower 30-day all-cause mortality (10.1% vs 12.5%; adjusted mortality difference, −2.5%; 95% confidence interval [CI], −3.6% to −1.4%; P < .001) and greater use of thrombolytic therapy (4.8% vs 1.7%; adjusted difference, 2.2%; 95% CI, 1.6% to 2.8%; P < .001). Differences in mortality also were observed at 1-day, 7-day, and 1-year follow-up. The outcome differences were specific for stroke, as stroke centers and nondesignated hospitals had similar 30-day all-cause mortality rates among those with gastrointestinal hemorrhage (5.0% vs 5.8%; adjusted mortality difference, +0.3%; 95% CI, −0.5% to 1.0%; P = .50) or acute myocardial infarction (10.5% vs 12.7%; adjusted mortality difference, +0.1%; 95% CI, −0.9% to 1.1%; P = .83).
Conclusion Among patients with acute ischemic stroke, admission to a designated stroke center was associated with modestly lower mortality and more frequent use of thrombolytic therapy.
Stroke is the leading cause of serious long-term disability and the third leading cause of mortality in the United States.1 Responding to the need for improvements in acute stroke care, the Brain Attack Coalition (BAC) published recommendations for the establishment of primary stroke centers in 2000.2 In December 2003, the Joint Commission began certifying stroke centers based on BAC criteria.3 Now, nearly 700 of the 5000 acute care hospitals in the United States are Joint Commission–certified stroke centers.4 Some states, such as New York, Massachusetts, and Florida, have established their own designation programs using the BAC core criteria.
Despite widespread support for the stroke center concept, there is limited empirical evidence demonstrating that admission to a stroke center is associated with lower mortality. Prior studies have largely focused on stroke processes of care, such as timeliness of treatment and use of thrombolytic therapy.5-8 Comparatively less is known about whether better care at stroke centers improves acute or long-term mortality.9 Therefore, our goal was to evaluate the association between admission to stroke centers for acute ischemic stroke and mortality.
The primary data source was the New York Statewide Planning and Research Cooperative System (SPARCS), a comprehensive reporting system that collects patient-level data from every hospital admission in New York State. This study was approved by the University of Rochester's institutional review board with a waiver of informed consent.
We identified 33 090 hospitalized patients, 18 years of age or older, with a principal diagnosis of acute ischemic stroke between January 1, 2005, and December 31, 2006. Ischemic stroke diagnoses were identified using International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes 433.x1, 434.x1, and 436. We limited the study sample to patients with an initial (index) stroke admission during the study period. We excluded 548 patients (1.7%) who lived outside of New York State and 123 patients (0.4%) with missing data. To avoid a bias against nondesignated hospitals, we also excluded 1472 patients (4.4%) for whom the distance from their home residence to the admitting hospital was greater than 20 miles, because these patients would be less likely to receive thrombolytic therapy. Consistent with the Centers for Medicare & Medicaid Services (CMS), all transfer patients were assigned to the transferring hospital. The final sample included 30 947 patients.
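To make the cohort construction concrete, the following is a minimal sketch of these inclusion and exclusion steps applied to a SPARCS-like discharge table. It is illustrative only: the file name and column names (principal_dx, patient_id, resident_state, distance_to_hospital_miles, and so on) are assumptions, not actual SPARCS field names.

```python
import pandas as pd

# Hypothetical SPARCS-like discharge table; file and column names are illustrative only.
df = pd.read_csv("sparcs_discharges.csv", parse_dates=["admit_date"])

def is_ischemic_stroke(code: str) -> bool:
    """Principal diagnosis codes for acute ischemic stroke: ICD-9-CM 433.x1, 434.x1, and 436."""
    code = code.replace(".", "")
    return (
        (len(code) == 5 and code.startswith(("433", "434")) and code.endswith("1"))
        or code == "436"
    )

stroke = df[
    df["principal_dx"].astype(str).map(is_ischemic_stroke)
    & (df["age"] >= 18)
    & df["admit_date"].between("2005-01-01", "2006-12-31")
]

# Keep only each patient's initial (index) stroke admission during the study period.
stroke = stroke.sort_values("admit_date").drop_duplicates("patient_id", keep="first")

# Exclusions: out-of-state residents, missing data, and home-to-hospital distance > 20 miles.
stroke = stroke[stroke["resident_state"] == "NY"]
stroke = stroke.dropna(subset=["zip_code", "admitting_hospital_id"])
stroke = stroke[stroke["distance_to_hospital_miles"] <= 20]
```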
Stroke Center Designation
The New York State Stroke Center Designation program is a collaboration between the New York State Department of Health (NYSDOH), the American Heart Association (AHA), and the New York State Quality Improvement Organization.7,10 Beginning in 2004, all New York hospitals were invited to apply to the NYSDOH for stroke center designation if they met the BAC criteria. These criteria are organized around 11 aspects of stroke care: acute stroke teams, written care protocols, emergency medical services (EMS), emergency departments, stroke units, neurosurgical services, commitment and support of the medical organization, neuroimaging services, laboratory services, outcome and quality improvement activities, and continuing medical education.2 Hospitals were evaluated for stroke center designation with an initial hospital survey, followed by an on-site review and inspection, to ensure hospital compliance with the BAC criteria and preparedness to operate as a stroke center.
Of 244 New York hospitals, 104 (42.6%) became state-designated stroke centers by the end of 2006 (eFigure). Because some hospitals became stroke centers during the study period, we assigned stroke center status for each patient based on the hospital's designation at the time of admission.
Evaluation of in-hospital mortality may be confounded by different lengths of stay between stroke centers and nondesignated hospitals. Moreover, the CMS is considering including 30-day ischemic stroke mortality as one of its publicly reported measures of hospital quality of care.11 Therefore, as our primary outcome, we examined 30-day all-cause mortality among those who were and were not admitted to a stroke center. As secondary outcomes, we evaluated 1-day, 7-day, and 1-year all-cause mortality in sensitivity analyses. Follow-up ended on the date of death or 1 year after the index hospitalization (through 2007), whichever came first. Mortality after discharge was determined through the Social Security Administration Death Master File. In addition, we explored how the use of thrombolytic therapy (ICD-9-CM procedure code 99.10 and/or diagnosis-related group 559), discharge to skilled nursing facilities, and all-cause readmission within 30 days of the index hospital discharge differed by whether a patient was admitted to a designated stroke center. Patients who died during the index hospitalization were excluded from the readmission analyses.
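As a sketch of how these outcome flags might be derived from the linked records (the field formats below are assumptions for illustration, not the actual SPARCS or Death Master File layout):

```python
import pandas as pd

def received_thrombolysis(procedure_codes, drg):
    """Thrombolytic therapy flag: ICD-9-CM procedure code 99.10 and/or diagnosis-related group 559.
    procedure_codes is assumed to be an iterable of code strings; drg an integer."""
    return "99.10" in set(procedure_codes) or drg == 559

def died_within_30_days(admit_date, death_date):
    """30-day all-cause mortality flag relative to the index admission date;
    death_date is None/NaT when no linked death record exists."""
    if pd.isna(death_date):
        return False
    return (pd.Timestamp(death_date) - pd.Timestamp(admit_date)).days <= 30
```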
The SPARCS reporting system provided data on patient characteristics, including sociodemographic information (age, sex, race/ethnicity, and insurance status) and comorbidities (differentiated from complications using a present-on-admission indicator). Comorbidities included prior myocardial infarction, congestive heart failure, atrial fibrillation, peripheral vascular disease, diabetes mellitus with or without complications, renal insufficiency, cancer, metastatic carcinoma, liver disease, chronic obstructive pulmonary disease, dementia, connective tissue disease, and peptic ulcer disease. These comorbidities were used to construct a modified version of the Charlson comorbidity index tailored for ischemic stroke.12 Hospital characteristics, such as size and academic affiliation, were obtained from the NYSDOH and the American Hospital Association Annual Survey. We also determined whether a patient lived in a rural or urban area by applying the Rural-Urban Commuting Area Codes classification system to the patient's residential zip code.13
Because it would be impractical to randomize patients with acute ischemic stroke to designated stroke centers or nondesignated hospitals, researchers must rely on observational data to assess the association of stroke centers with mortality. However, both measured and unmeasured confounding inherent in observational studies may lead to selection bias for treatment. For example, EMS personnel may systematically transport more severely ill patients to stroke centers. Standard statistical approaches, such as multivariate logistic regression or propensity score analysis, cannot account for unmeasured confounding because they can only adjust for measured covariates.14,15 One approach is to use instrumental variable analysis (an econometric method) to help minimize unmeasured confounding.16,17
The key notion behind instrumental variable analysis is that the instrument is highly correlated with the treatment (stroke center vs nondesignated hospital) but is otherwise unrelated to observed or unobserved prognostic risk factors so that it does not directly or indirectly affect patient outcomes except through treatment.16,17 This is similar to a randomized controlled trial in which the randomization process assigns patients to treatment groups, but the randomization itself is not directly associated with outcomes.
In the case of stroke center admission, we used differential distance, an instrumental variable that has been used in prior studies of acute myocardial infarction and trauma.16-20 Differential distance was calculated as the straight-line distance from a patient's residence to the nearest stroke center minus the straight-line distance from that residence to the nearest hospital of any type. The differential distance is therefore the additional distance, if any, beyond the nearest hospital that a patient must travel to reach a stroke center.
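As an illustration of this construction (with hypothetical inputs; the study geocoded patients' residences and New York hospitals), a straight-line distance can be computed with the haversine formula and the instrument taken as the difference between the nearest stroke center and the nearest hospital of any type:

```python
import numpy as np

def straight_line_miles(lat1, lon1, lat2, lon2):
    """Great-circle ("straight-line") distance in miles between two points, via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 3958.8 * 2 * np.arcsin(np.sqrt(a))  # mean Earth radius of ~3958.8 miles

def differential_distance(patient_lat, patient_lon, hospitals):
    """hospitals: iterable of (lat, lon, is_stroke_center) tuples (a hypothetical format).

    Returns the extra miles, if any, beyond the nearest hospital needed to reach
    the nearest designated stroke center."""
    d_all = [straight_line_miles(patient_lat, patient_lon, lat, lon) for lat, lon, _ in hospitals]
    d_sc = [straight_line_miles(patient_lat, patient_lon, lat, lon) for lat, lon, sc in hospitals if sc]
    return min(d_sc) - min(d_all)
```

For a patient whose nearest hospital is itself a designated stroke center, the differential distance is 0; larger values indicate progressively more extra travel to reach a stroke center.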
The choice of differential distance as an instrumental variable rests on 2 assumptions. First, it is reasonable to assume that a patient transported by private vehicle will go to the nearest hospital; importantly, the New York State Stroke Protocol requires EMS personnel to transport stroke patients to the nearest stroke center if the prehospital time is less than 2 hours,21 so patients who live close to a stroke center are more likely to be transported to one. Second, patients cannot predict if and when they will have a stroke and therefore do not choose their residence based on proximity to a given hospital. Thus, differential distance is highly predictive of whether a patient was admitted to a stroke center but is not associated with disease characteristics such as stroke severity.
Baseline characteristics were compared between patients admitted to designated and nondesignated hospitals using the standardized difference. This method has been previously used to assess the comparability of study participants.22 The standardized difference for continuous variables was calculated as follows:

$$d = \frac{100\,(\bar{x}_{sc} - \bar{x}_{nsc})}{\sqrt{(s_{sc}^{2} + s_{nsc}^{2})/2}},$$

where $\bar{x}_{sc}$ and $\bar{x}_{nsc}$ denote the mean of a covariate in stroke center and nondesignated hospital patients and $s_{sc}^{2}$ and $s_{nsc}^{2}$ denote the corresponding variances. The standardized difference for binary variables was calculated as follows:

$$d = \frac{100\,(P_{sc} - P_{nsc})}{\sqrt{[P_{sc}(1 - P_{sc}) + P_{nsc}(1 - P_{nsc})]/2}},$$

where $P_{sc}$ and $P_{nsc}$ denote the prevalence of the binary variable. An absolute standardized difference greater than 10 (approximately equivalent to P < .05) indicates significant imbalance of a baseline covariate, whereas a smaller value supports the balance assumption between groups.
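As a minimal numerical illustration of these formulas (not the authors' SAS or Stata code), the following computes the standardized difference, scaled by 100, for a continuous and a binary covariate; the example prevalences at the end are made up.

```python
import numpy as np

def std_diff_continuous(x_sc, x_nsc):
    """Standardized difference (x100) for a continuous covariate, given the two groups' values."""
    x_sc, x_nsc = np.asarray(x_sc, float), np.asarray(x_nsc, float)
    pooled_sd = np.sqrt((x_sc.var(ddof=1) + x_nsc.var(ddof=1)) / 2)
    return 100 * (x_sc.mean() - x_nsc.mean()) / pooled_sd

def std_diff_binary(p_sc, p_nsc):
    """Standardized difference (x100) for a binary covariate, given the two groups' prevalences."""
    pooled = np.sqrt((p_sc * (1 - p_sc) + p_nsc * (1 - p_nsc)) / 2)
    return 100 * (p_sc - p_nsc) / pooled

# Illustrative values only: prevalences of 30% vs 25% give a standardized difference of about 11.2,
# which would exceed the imbalance threshold of 10.
print(round(std_diff_binary(0.30, 0.25), 1))
```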
We then assessed whether admission to a designated stroke center was associated with lower mortality using an instrumental variable analysis estimated by a simultaneous 2-equation bivariate probit model.23 The first equation estimated the probability of stroke center admission as a function of differential distance and other covariates. The second equation assessed the association of stroke center admission with mortality, adjusted for other patient and hospital factors. Estimating 2 equations jointly using a bivariate probit approach provides consistent estimates of the treatment effect.23,24 The instrumental variable–adjusted mortality estimate (technically, the average marginal effect) can be interpreted as the mean predicted difference in the probability of death for stroke patients who received treatment at designated stroke centers because they lived relatively closer to stroke centers vs patients who received treatment at nondesignated hospitals because they lived farther away.
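The estimation itself was performed in SAS and Stata. Purely to illustrate the likelihood that such a recursive 2-equation bivariate probit maximizes, the sketch below uses hypothetical variable names: y is the mortality indicator, t the stroke center admission indicator, x the outcome-equation covariates, and z the treatment-equation covariates (which would include differential distance). It omits the covariate handling, standard errors, and other refinements of the actual analysis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal, norm

def neg_loglik(params, y, t, x, z):
    """Recursive bivariate probit: t* = z'gamma + u and y* = x'beta + delta*t + v,
    with (u, v) bivariate standard normal with correlation rho."""
    k_x, k_z = x.shape[1], z.shape[1]
    beta, delta = params[:k_x], params[k_x]
    gamma = params[k_x + 1:k_x + 1 + k_z]
    rho = np.tanh(params[-1])                 # keeps the correlation in (-1, 1)
    w_y, w_t = x @ beta + delta * t, z @ gamma
    q_y, q_t = 2 * y - 1, 2 * t - 1           # sign flips for the observed binary outcomes
    probs = np.array([
        multivariate_normal.cdf([qy * wy, qt * wt], mean=[0.0, 0.0],
                                cov=[[1.0, qy * qt * rho], [qy * qt * rho, 1.0]])
        for qy, wy, qt, wt in zip(q_y, w_y, q_t, w_t)
    ])
    return -np.sum(np.log(np.clip(probs, 1e-12, None)))

def fit_bivariate_probit(y, t, x, z):
    """Maximize the joint likelihood; returns the stacked parameter vector."""
    start = np.zeros(x.shape[1] + 1 + z.shape[1] + 1)
    return minimize(neg_loglik, start, args=(y, t, x, z), method="BFGS").x

def average_marginal_effect(params, x):
    """Mean change in predicted probability of death when the stroke center indicator
    is switched from 0 to 1, holding covariates at their observed values."""
    k_x = x.shape[1]
    beta, delta = params[:k_x], params[k_x]
    return np.mean(norm.cdf(x @ beta + delta) - norm.cdf(x @ beta))
```

The instrumental variable–adjusted mortality difference reported in the study corresponds conceptually to this average marginal effect.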
We examined the robustness of our findings in several ways. First, we sought to determine whether admission to a designated stroke center was associated with lower 1-day, 7-day, and 1-year all-cause mortality by repeating the analyses for these time points. Second, because the majority of stroke centers are located in the city of New York (eFigure), we performed subgroup analyses for patients living in the New York metropolitan area and for those in upstate New York. Third, white individuals and members of minority race/ethnic groups often live in different neighborhoods and may have systematically used different hospitals; we therefore stratified the analysis by race/ethnicity and checked whether the effect of stroke centers varied across groups. Fourth, to determine if the mortality findings were specific to stroke, we compared mortality among patients admitted at designated and nondesignated hospitals for 2 other acute life-threatening conditions: gastrointestinal (GI) hemorrhage and acute myocardial infarction (AMI). Both conditions are quality indicators recommended by the Agency for Healthcare Research and Quality to assess a hospital's quality of care.25 If adjusted mortality was also lower at designated stroke centers for either of these 2 conditions, this would suggest that the lower stroke mortality was attributable to these hospitals' overall commitment to quality improvement rather than to actions specific to stroke care.
All tests were evaluated at a 2-sided significance level of P < .05. The analyses were performed using SAS 9.2 (SAS Institute, Cary, North Carolina) and Stata 11 (StataCorp, College Station, Texas).
Among 30 947 patients with acute ischemic stroke, 15 297 (49.4%) were admitted to designated stroke centers (n = 104) and 15 650 (50.6%) to nondesignated hospitals. Table 1 compares baseline characteristics of the study cohort. Patients admitted to stroke centers were younger, more often non-Hispanic black, less likely to live in a rural area, and more likely to be admitted to a hospital with more beds and an academic affiliation. Patients admitted to stroke centers were also relatively healthier with respect to the prevalence of comorbidities, although none of the differences were statistically significant.
To assess the assumption that the instrumental variable is highly correlated with the treatment variable, we fit a logistic regression model and found that differential distance was highly predictive of whether a patient was admitted to a stroke center (C statistic = 0.88). To assess the assumption that the instrumental variable does not independently affect patient outcomes (ie, that it is not associated with other potential confounders of the outcome), we examined the balance of observed health status according to the differential distance to a stroke center.
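The first of these checks can be illustrated with a short first-stage sketch (argument names are hypothetical, and the actual model and covariates may have differed): fit a logistic regression of stroke center admission on differential distance plus covariates and report the area under the ROC curve as the C statistic.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def first_stage_c_statistic(admitted_to_sc, diff_distance, covariates):
    """C statistic of a logistic model predicting stroke center admission from the
    instrument (differential distance) and other covariates."""
    X = sm.add_constant(np.column_stack([diff_distance, covariates]))
    model = sm.Logit(admitted_to_sc, X).fit(disp=0)
    return roc_auc_score(admitted_to_sc, model.predict(X))
```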
Table 2 shows baseline characteristics according to whether a patient lived closer to a stroke center (differential distance = 0 miles or >0 miles). Although there were small differences in certain measures, age and the prevalence of most comorbidities were more similar than in the comparison in Table 1, as reflected by smaller standardized differences. Despite the similarity in observed health status, the differential distance groups differed substantially in their probability of being admitted to a stroke center. The strong association between differential distance and stroke center admission, together with the balance in observed health status, supports the key instrumental variable assumptions. However, we did not have data to examine unmeasured stroke severity.
Mortality and Other Outcomes
Mortality rates among patients admitted to stroke centers and nondesignated hospitals are summarized in Table 3. The overall 30-day all-cause mortality rate was 10.1% for patients admitted to stroke centers and 12.5% for patients admitted to nondesignated hospitals (unadjusted mortality difference, −2.4%; P < .001). Using instrumental variable analysis, we found admission to a designated stroke center was associated with a 2.5% absolute reduction in 30-day all-cause mortality (adjusted mortality difference, −2.5%; 95% confidence interval [CI], −3.6% to −1.4%; P < .001).
Use of thrombolytic therapy was 4.8% (739/15 297) for patients admitted at stroke centers and 1.7% (266/15 650) for patients admitted at nondesignated hospitals (P < .001). Admission to a stroke center was associated with increased use of thrombolytic therapy (adjusted difference in thrombolysis use, 2.2%; 95% CI, 1.6% to 2.8%; P < .001). However, further adjustment for use of thrombolytic therapy in the instrumental variable models did not substantially alter the association of stroke center admission with lower 30-day mortality (adjusted mortality difference, −2.7%; 95% CI, −3.8% to −1.6%; P < .001). Among those surviving to hospital discharge, there was no significant difference in rates of 30-day all-cause readmission (14.8% vs 14.2%; adjusted difference, 1.1%; 95% CI, −0.3% to 2.6%; P = .12) or discharge to a skilled nursing facility (24.6% vs 28.5%; adjusted difference, −0.5%; 95% CI, −2.1% to 1.2%; P = .56).
In sensitivity analyses of additional time points, lower all-cause mortality at stroke centers was observed within the first hospital day and at 7 days, and the difference was sustained at 1 year after the index hospitalization (Table 3). Subgroup analyses of the New York metropolitan area and upstate New York, as well as analyses stratified by race/ethnicity, similarly showed lower all-cause mortality at designated stroke centers (Table 4).
In analyses to assess whether the lower mortality at designated stroke centers was specific to stroke, we examined mortality rates for patients admitted with GI hemorrhage (n = 39 409) and AMI (n = 40 024) at stroke centers and nondesignated hospitals. Thirty-day all-cause mortality for GI hemorrhage was comparable for patients admitted to stroke centers and nondesignated hospitals (5.0% vs 5.8%; adjusted mortality difference, +0.3%; 95% CI, −0.5% to 1.0%; P = .50). Similarly, 30-day all-cause mortality for AMI did not significantly differ between the 2 groups (10.5% vs 12.7%; adjusted mortality difference, +0.1%; 95% CI, −0.9% to 1.1%; P = .83). For these 2 conditions, there also were no differences in 1-day or 7-day mortality (Table 5). Based on sample size and observed mortality rates, a retrospective power analysis indicated that our study had more than 90% statistical power to detect a 0.1% mortality difference for AMI and 70% power to detect a 0.3% mortality difference for GI hemorrhage.
Reduced mortality and increased use of acute stroke therapies are 2 expected benefits of primary stroke centers.2 Nevertheless, limited empirical evidence supports the benefits of stroke centers, particularly with respect to outcome-based quality measures.9 In this large observational study, we found that patients admitted to stroke centers were more likely to receive thrombolytic therapy and had lower 30-day mortality rates compared with patients admitted to nondesignated hospitals. This survival benefit was sustained for up to 1 year after stroke occurrence and was independent of patient and hospital characteristics. Importantly, the lower mortality at designated stroke centers was specific to stroke and was not found for other acute life-threatening conditions, suggesting that the mortality benefit was related to stroke center designation, rather than to overall quality improvement efforts at designated stroke centers. Even though the differences in outcomes between stroke centers and nondesignated hospitals were modest, our study suggests that the establishment of a BAC-recommended stroke system of care was associated with improvement in some outcomes for patients with acute ischemic stroke.
Previous evaluations of stroke center quality performance have primarily focused on process measures with limited information on patient outcomes.5-8 To date, only 1 study in Finland has reported lower 1-year stroke case fatality associated with stroke centers.26 Our study extends the findings from this prior study, as systems of stroke care in the United States may differ substantially from other national health care systems (especially those with universal health coverage). We were able to report both short-term and 1-year mortality outcomes and to demonstrate that lower mortality was specific to stroke at designated stroke centers.
Geographic patterns of stroke triage are likely to be nonrandom. Designated stroke centers and nondesignated hospitals may treat different groups of patients in terms of demographics and disease severity. For instance, it is possible that EMS personnel may systematically transport more severely ill patients to stroke centers,4 which is consistent with our finding of a greater instrumental variable–adjusted mortality difference (in absolute value) compared with the unadjusted difference (eg, 2.5% vs 2.4% for 30-day all-cause mortality). Moreover, prior studies have reported that stroke centers are more likely to admit patients with hemorrhagic stroke, which is associated with higher mortality compared with ischemic stroke.7,8 Indeed, we found a similar pattern, in which nearly 60% (4193/7243) of patients with hemorrhagic stroke in New York were admitted to a stroke center during our study period.
In the absence of randomized controlled trials, controlling for treatment patterns is often difficult and assessments of mortality outcomes may be biased given the presence of treatment selection. Our analysis sought to address these concerns by using an instrumental variable analysis to control for the selection bias (both measured and unmeasured) inherent in observational studies. After adjusting for patient and hospital characteristics and the potential for unmeasured selection bias with the instrumental variable analysis, we found that admission to a stroke center for an acute ischemic stroke was associated with a 2.5% absolute reduction in 30-day all-cause mortality.
The BAC recommendations serve as the cornerstone for the establishment of primary stroke centers. Previous studies have shown reduced mortality among patients who were treated by neurologists or who received organized care in a stroke care unit.27-29 Although we cannot determine which individual components of the BAC criteria for stroke center designation were most important for the lower mortality observed in this study, it is likely that the BAC criteria cannot be examined as individually isolated units. Rather, the 11 core criteria combined establish the infrastructure and define an approach for optimizing care for acute ischemic stroke. By emphasizing an integrated and organized system of care with EMS, hospital emergency departments, acute stroke teams, stroke units, and neuroimaging services, the BAC criteria facilitate rapid transportation, evaluation, and treatment. Moreover, availability of stroke protocols standardizes acute stroke care and minimizes protocol violation. These efforts are further enhanced by the BAC criteria's emphasis on surveillance of outcomes, quality initiatives, and continuing educational programs.
Of equal importance are improved performance measures, which may also affect downstream care and outcomes. Previous studies have found associations between process-of-care performance measures and mortality outcomes for patients with cardiovascular disease and stroke.30,31 It is possible that improved guideline-based treatment, more frequent use of thrombolytic therapy, enhanced secondary prevention and risk factor management, early rehabilitation, and patient education programs also contribute to the lower mortality rates among patients treated at stroke centers. However, these efforts may not have an appreciable short-term or immediate life-saving effect, which is consistent with our findings of a minimal mortality difference at day 1 and similar readmission rates at 30 days, in contrast to the greater survival benefit at the end of 1-year follow-up. Collectively, it is likely that the combination of these efforts improves the structure and process of stroke care and subsequently contributes to improved patient outcomes.
Since stroke center certification is voluntary, it is possible that hospitals that sought designation were already committed to quality improvement and would have achieved these results regardless of designation. A recent evaluation of Joint Commission–certified primary stroke centers found that certified hospitals had better outcomes than noncertified hospitals even before the certification program began.32 Based on our data, we cannot definitively establish whether the designation program resulted in reduced mortality or whether higher-quality hospitals were simply the ones that participated in designation. However, this concern is mitigated by our specificity analysis, in which we examined mortality rates at designated and nondesignated hospitals for 2 other life-threatening conditions, GI hemorrhage and AMI. Hospitals committed to quality improvement before stroke center designation would be expected to demonstrate lower mortality for these other medical conditions as well as for stroke. Nevertheless, the lower mortality observed in this study was specific to stroke, suggesting that the lower stroke mortality cannot be explained simply by hospitals that received stroke center designation being more likely to implement hospital-wide quality improvement.
Our study should be interpreted in the context of the following limitations. First, the SPARCS database did not include information on stroke severity. The differences in mortality may therefore reflect patient case mix rather than variation in the quality of acute stroke care. However, such selection bias is more likely to work against stroke centers, which tend to receive more severely ill patients, than in their favor.4
Second, we were unable to assess other performance measures and outcomes, such as eligibility for and contraindications to thrombolytic therapy, thrombolysis-related hemorrhage, quality of life, and neurological and functional status at discharge, because these measures were not collected in the SPARCS database; nor were we able to assess cause-specific mortality. Nonetheless, our study was able to report on the relationship between stroke center admission and all-cause mortality, an outcome that has not been routinely reported.
Third, while our sensitivity and specificity analyses suggest that lower mortality associated with stroke center admission may be due to the implementation of the BAC criteria as part of stroke center designation, other quality-improvement initiatives (eg, the AHA Get With The Guidelines–Stroke program), economic incentives from pay for performance, and public reporting could also influence stroke care and outcomes.
Fourth, our study only included data from New York. The generalizability of our findings to other states and agencies certifying stroke centers remains to be established. Fifth, we were unable to assess acute treatments other than thrombolytic therapy, including the use of life-sustaining interventions and end-of-life care, which may affect short-term or intermediate survival.33 Sixth, many hospitals were transitioning to stroke center status during our study period. Defining a stroke center by its designation status on the admission date may have led us to underestimate the mortality difference associated with stroke center admission; nonetheless, this conservative approach still demonstrated a lower risk of death associated with stroke centers. Finally, the instrumental variable approach assumes that differential distance has no independent effect on patient outcomes except through its effect on the likelihood of receiving treatment at a designated stroke center. By its very nature, this assumption cannot be directly verified. However, it would generally be satisfied if a patient's residence is not associated with stroke severity, which appears reasonable. Moreover, differential distance has been widely and successfully used as an instrument to control for selection bias in a variety of clinical settings.16-20
In conclusion, we found that admission to designated stroke centers in New York State was associated with a modestly lower risk of death for patients with an acute ischemic stroke, and the lower mortality in designated stroke centers appeared specific to stroke.
Corresponding Author: Ying Xian, MD, PhD, Duke Clinical Research Institute, 2400 Pratt St, Durham, NC 27705 (ying.xian@duke.edu).
Author Contributions: Dr Xian had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Xian, Holloway, Friedman.
Acquisition of data: Xian, Friedman.
Analysis and interpretation of data: Xian, Holloway, Chan, Noyes, Shah, Ting, Chappel, Peterson, Friedman.
Drafting of the manuscript: Xian, Chan.
Critical revision of the manuscript for important intellectual content: Xian, Holloway, Chan, Noyes, Shah, Ting, Chappel, Peterson, Friedman.
Statistical analysis: Xian.
Obtained funding: Xian, Holloway, Friedman.
Administrative, technical, or material support: Xian, Holloway, Friedman.
Study supervision: Xian, Holloway, Noyes, Shah, Friedman.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Holloway reported being a consultant for Milliman Guidelines, reviewing neurology guidelines. Dr Peterson reported receiving research grants from Schering Plough, Bristol-Myers Squibb, Merck/Schering Plough, Sanofi Aventis, and Saint Judes and being a consultant/advisory board member to Pfizer and Bayer Corp. No other disclosures were reported.
Funding/Support: This study was funded in part by a predoctoral fellowship 0815772D from the American Heart Association (AHA) Founders Affiliate (Dr Xian). This study received infrastructure support from the Agency for Healthcare Research and Quality (AHRQ) (U18HS016964). Dr Chan is supported by a Career Development Grant Award (K23HL102224) from the National Heart, Lung, and Blood Institute. Dr Shah is supported by the Paul B. Beeson Career Development Award (National Institute on Aging 1K23AG028942).
Role of the Sponsors: The funding organizations had no role in the design and conduct of the study; in the collection, analysis, and interpretation of the data; or in the preparation, review, or approval of the manuscript.
Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of the AHA and AHRQ. This study used a linked SPARCS-SSADMF database. The interpretation and reporting of these data are the sole responsibility of the authors. Dr Peterson, a contributing editor for JAMA, was not involved in the editorial review of or the decision to publish this article.
Previous Presentation: This study was presented in part at the 2010 American Heart Association Quality of Care and Outcomes Research Conference; Washington, DC; May 20, 2010.
Additional Contributions: We acknowledge the insightful contributions of Laine Thomas, PhD; Wenqin Pan, PhD; and Margueritte Cox, MS (Duke Clinical Research Institute, Durham, North Carolina), for assistance with statistical analyses and geomapping. We thank Zainab Magdon-Ismail, EdM, MPH (American Heart Association, Founders Affiliate); Toby I. Gropen, MD (Long Island College Hospital, New York); Nancy R. Barhydt, DrPH, RN; and Anna Colello, Esq (New York State Department of Health), for assistance with data. None were compensated for their contributions.
References
1. Lloyd-Jones D, Adams RJ, Brown TM, et al; American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Heart disease and stroke statistics: 2010 update: a report from the American Heart Association [published correction appears in Circulation. 2010;121(12):e260]. Circulation. 2010;121(7):e46-e215.
2. Alberts MJ, Hademenos G, Latchaw RE, et al. Recommendations for the establishment of primary stroke centers: Brain Attack Coalition. JAMA. 2000;283(23):3102-3109.
5. Lattimore SU, Chalela J, Davis L, et al; NINDS Suburban Hospital Stroke Center. Impact of establishing a primary stroke center at a community hospital on the use of thrombolytic therapy: the NINDS Suburban Hospital Stroke Center experience. Stroke. 2003;34(6):e55-e57.
6. Douglas VC, Tong DC, Gillum LA, et al. Do the Brain Attack Coalition's criteria for stroke centers improve care for ischemic stroke? Neurology. 2005;64(3):422-427.
7. Gropen TI, Gagliano PJ, Blake CA, et al; NYSDOH Stroke Center Designation Project Workgroup. Quality improvement in acute stroke: the New York State Stroke Center Designation Project. Neurology. 2006;67(1):88-93.
8. Stradling D, Yu W, Langdorf ML, et al. Stroke care delivery before vs after JCAHO stroke center certification. Neurology. 2007;68(6):469-470.
9. Fonarow GC, Gregory T, Driskill M, et al. Hospital certification for optimizing cardiovascular disease and stroke quality of care and outcomes. Circulation. 2010;122(23):2459-2469.
12. Goldstein LB, Samsa GP, Matchar DB, Horner RD. Charlson Index comorbidity adjustment for ischemic stroke outcome studies. Stroke. 2004;35(8):1941-1945.
14. Rosenbaum PR, Rubin DB. Reducing bias in observational studies using subclassification on the propensity score. J Am Stat Assoc. 1984;79(387):516-524.
15. Stukel TA, Fisher ES, Wennberg DE, Alter DA, Gottlieb DJ, Vermeulen MJ. Analysis of observational studies in the presence of treatment selection bias: effects of invasive cardiac management on AMI survival using propensity score and instrumental variable methods. JAMA. 2007;297(3):278-285.
16. McClellan M, McNeil BJ, Newhouse JP. Does more intensive treatment of acute myocardial infarction in the elderly reduce mortality? Analysis using instrumental variables. JAMA. 1994;272(11):859-866.
17. Newhouse JP, McClellan M. Econometrics in outcomes research: the use of instrumental variables. Annu Rev Public Health. 1998;19(1):17-34.
18. Beck CA, Penrod J, Gyorkos TW, Shapiro S, Pilote L. Does aggressive care following acute myocardial infarction reduce mortality? Analysis with instrumental variables to compare effectiveness in Canadian and United States patient populations. Health Serv Res. 2003;38(6 pt 1):1423-1440.
19. McConnell KJ, Newgard CD, Mullins RJ, Arthur M, Hedges JR. Mortality benefit of transfer to level I versus level II trauma centers for head-injured patients. Health Serv Res. 2005;40(2):435-457.
20. Pracht EE, Tepas JJ III, Celso BG, Langland-Orban B, Flint L. Survival advantage associated with treatment of injury at designated trauma centers: a bivariate probit model with instrumental variables. Med Care Res Rev. 2007;64(1):83-97.
22. Austin PC. Using the standardized difference to compare the prevalence of a binary variable between two groups in observational research. Comm Statist Simulation Comput. 2009;38(6):1228-1234.
23. Greene WH. Econometric Analysis. 5th ed. Upper Saddle River, NJ: Prentice Hall; 2003.
24. Bhattacharya J, Goldman D, McCaffrey D. Estimating probit models with self-selected treatments. Stat Med. 2006;25(3):389-413.
26. Meretoja A, Roine RO, Kaste M, et al. Effectiveness of primary and comprehensive stroke centers: PERFECT stroke: a nationwide observational study from Finland. Stroke. 2010;41(6):1102-1107.
27. Stroke Unit Trialists' Collaboration. Organised inpatient (stroke unit) care for stroke. Cochrane Database Syst Rev. 2007;(4):CD000197.
28. Mitchell JB, Ballard DJ, Whisnant JP, Ammering CJ, Samsa GP, Matchar DB. What role do neurologists play in determining the costs and outcomes of stroke patients? Stroke. 1996;27(11):1937-1943.
29. Goldstein LB, Matchar DB, Hoff-Lindquist J, Samsa GP, Horner RD. VA Stroke Study: neurologist care is associated with increased testing but improved outcomes. Neurology. 2003;61(6):792-796.
30. Peterson ED, Roe MT, Mulgund J, et al. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA. 2006;295(16):1912-1920.
31. Bravata DM, Wells CK, Lo AC, et al. Processes of care associated with acute stroke outcomes. Arch Intern Med. 2010;170(9):804-810.
32. Lichtman JH, Allen NB, Wang Y, Watanabe E, Jones SB, Goldstein LB. Stroke patient outcomes in US hospitals before the start of the Joint Commission Primary Stroke Center certification program. Stroke. 2009;40(11):3574-3579.
33. Holloway RG, Quill TE. Mortality as a measure of quality: implications for palliative and end-of-life care. JAMA. 2007;298(7):802-804.