Figure 1.  Association Between Percutaneous Coronary Intervention (PCI) Volume and Observed to Expected (O/E) Mortality Ratio

The quintiles were determined by O/E mortality ratio in a given year.

Figure 2.  Association Between Hospital Observed to Expected (O/E) Mortality Ratio in the Index Year and Hospital O/E Mortality Ratio in the Subsequent Year

Hospital-years were weighted by hospital PCI volume, with larger symbols indicating higher percutaneous coronary intervention (PCI) volume. Outlier status was defined as a 95% CI for mortality estimate that was higher or lower than the mean statewide mortality rate in that year.

Table 1.  Hospital-Year PCI Statistics by Quintiles of O/E Mortality Ratio
Table 2.  Subsequent Change in Hospital PCI Performance by Quintiles of O/E Mortality Ratio in Index Year
Table 3.  Models Explaining Variance in O/E Mortality Ratio
References

1. Hannan EL, Cozzens K, King SB III, Walford G, Shah NR. The New York State cardiac registries: history, contributions, limitations, and lessons for future efforts to assess and publicly report healthcare outcomes. J Am Coll Cardiol. 2012;59(25):2309-2316. doi:10.1016/j.jacc.2011.12.051
2. Wadhera RK, Joynt Maddox KE, Yeh RW, Bhatt DL. Public reporting of percutaneous coronary intervention outcomes: moving beyond the status quo. JAMA Cardiol. 2018;3(7):635-640. doi:10.1001/jamacardio.2018.0947
3. Hannan EL, Kilburn H Jr, Racz M, Shields E, Chassin MR. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA. 1994;271(10):761-766. doi:10.1001/jama.1994.03510340051033
4. Peterson ED, DeLong ER, Jollis JG, Muhlbaier LH, Mark DB. The effects of New York’s bypass surgery provider profiling on access to care and patient outcomes in the elderly. J Am Coll Cardiol. 1998;32(4):993-999. doi:10.1016/S0735-1097(98)00332-5
5. Waldo SW, McCabe JM, O’Brien C, Kennedy KF, Joynt KE, Yeh RW. Association between public reporting of outcomes with procedural management and mortality for patients with acute myocardial infarction. J Am Coll Cardiol. 2015;65(11):1119-1126. doi:10.1016/j.jacc.2015.01.008
6. Wadhera RK, Bhatt DL. Taking the “public” out of public reporting of percutaneous coronary intervention. JAMA. 2017;318(15):1439-1440. doi:10.1001/jama.2017.12087
7. Joynt KE, Blumenthal DM, Orav EJ, Resnic FS, Jha AK. Association of public reporting for percutaneous coronary intervention with utilization and outcomes among Medicare beneficiaries with acute myocardial infarction. JAMA. 2012;308(14):1460-1468. doi:10.1001/jama.2012.12922
8. Cavender MA, Joynt KE, Parzynski CS, et al. State mandated public reporting and outcomes of percutaneous coronary intervention in the United States. Am J Cardiol. 2015;115(11):1494-1501. doi:10.1016/j.amjcard.2015.02.050
9. New York Department of Health. Cardiovascular disease data and statistics. https://www.health.ny.gov/statistics/diseases/cardiovascular/. Accessed May 10, 2019.
10. Nickell S. Biases in dynamic models with fixed effects. Econometrica. 1981;49(6):1417-1426. doi:10.2307/1911408
11. Arellano M, Bover O. Another look at the instrumental variable estimation of error-components models. J Econom. 1995;68(1):29-51. doi:10.1016/0304-4076(94)01642-D
12. Roodman D. How to do xtabond2: an introduction to difference and system GMM in Stata. Stata J. 2009;9(1):86-136. doi:10.1177/1536867X0900900106
13. Moscucci M, Eagle KA, Share D, et al. Public reporting and case selection for percutaneous coronary interventions: an analysis from two large multicenter percutaneous coronary intervention databases. J Am Coll Cardiol. 2005;45(11):1759-1765. doi:10.1016/j.jacc.2005.01.055
14. Blumenthal DM, Valsdottir LR, Zhao Y, et al. A survey of interventional cardiologists’ attitudes and beliefs about public reporting of percutaneous coronary intervention. JAMA Cardiol. 2018;3(7):629-634. doi:10.1001/jamacardio.2018.1095
15. Apolito RA, Greenberg MA, Menegus MA, et al. Impact of the New York State Cardiac Surgery and Percutaneous Coronary Intervention Reporting System on the management of patients with acute myocardial infarction complicated by cardiogenic shock. Am Heart J. 2008;155(2):267-273. doi:10.1016/j.ahj.2007.10.013
16. Feldman DN, Yeh RW. Public reporting of percutaneous coronary intervention mortality in New York State: are we helping our patients? Circ Cardiovasc Qual Outcomes. 2017;10(9):e004027. doi:10.1161/CIRCOUTCOMES.117.004027
17. Gupta A, Yeh RW, Tamis-Holland JE, et al. Implications of public reporting of risk-adjusted mortality following percutaneous coronary intervention: misperceptions and potential consequences for high-risk patients including nonsurgical patients. JACC Cardiovasc Interv. 2016;9(20):2077-2085. doi:10.1016/j.jcin.2016.08.012
18. Resnic FS, Welt FG. The public health hazards of risk avoidance associated with public reporting of risk-adjusted outcomes in coronary intervention. J Am Coll Cardiol. 2009;53(10):825-830. doi:10.1016/j.jacc.2008.11.034
19. Bricker RS, Valle JA, Plomondon ME, Armstrong EJ, Waldo SW. Causes of mortality after percutaneous coronary intervention. Circ Cardiovasc Qual Outcomes. 2019;12(5):e005355. doi:10.1161/CIRCOUTCOMES.118.005355
20. Fernandez G, Narins CR, Bruckel J, Ayers B, Ling FS. Patient and physician perspectives on public reporting of mortality ratings for percutaneous coronary intervention in New York State. Circ Cardiovasc Qual Outcomes. 2017;10(9):e003511. doi:10.1161/CIRCOUTCOMES.116.003511
21. Doll JA, Dai D, Roe MT, et al. Assessment of operator variability in risk-standardized mortality following percutaneous coronary intervention: a report from the NCDR. JACC Cardiovasc Interv. 2017;10(7):672-682. doi:10.1016/j.jcin.2016.12.019
22. Wadhera RK, O’Brien CW, Joynt Maddox KE, et al. Public reporting of percutaneous coronary intervention outcomes: institutional costs and physician burden. J Am Coll Cardiol. 2019;73(20):2604-2608. doi:10.1016/j.jacc.2019.03.014
Original Investigation
September 18, 2019

Association Between Current and Future Annual Hospital Percutaneous Coronary Intervention Mortality Rates

Author Affiliations
  • 1Division of Cardiovascular Medicine, Department of Medicine, Stanford University School of Medicine, Stanford, California
  • 2Department of Cardiology, Keio University School of Medicine, Tokyo, Japan
  • 3Center for Health Policy, Department of Medicine, Stanford University, Stanford, California
  • 4Center for Primary Care and Outcomes Research, Department of Medicine, Stanford University, Stanford, California
  • 5Veterans Affairs Palo Alto Health Care System, Palo Alto, California
JAMA Cardiol. 2019;4(11):1077-1083. doi:10.1001/jamacardio.2019.3221
Key Points

Question  Are publicly reported measures of a hospital’s 30-day all-cause mortality after percutaneous coronary intervention associated with its mortality rates in subsequent years?

Findings  In this study, on the basis of risk-adjusted percutaneous coronary intervention–related mortality rates from 1998 to 2016 at 67 New York hospitals (960 hospital-years), the hospital observed to expected mortality ratio was weakly associated with the ratio in the following year. Hospitals identified as outliers with high or low mortality experienced regression to the mean the following year.

Meaning  Annual hospital-level percutaneous coronary intervention–related mortality rates were poorly associated with future performance and thus may not be useful for helping patients identify high-quality, low-mortality care.

Abstract

Importance  Multiple states publicly report a hospital’s risk-adjusted mortality rate for percutaneous coronary intervention (PCI) as a quality measure. However, whether reported annual PCI mortality is associated with a hospital’s future performance is unclear.

Objective  To evaluate the association between reported risk-adjusted hospital PCI-related mortality and a hospital’s future PCI-related mortality.

Design, Setting, and Participants  This study used data from the New York Percutaneous Intervention Reporting System from January 1, 1998, to December 31, 2016, to assess hospitals that perform PCI.

Exposures  Publicly reported, risk-adjusted 30-day mortality after PCI.

Main Outcomes and Measures  The primary analysis evaluated the association between a hospital’s reported risk-adjusted PCI-related mortality and future PCI-related mortality. The correlation between a hospital’s observed to expected (O/E) PCI-related mortality rates each year and future O/E mortality ratios was assessed. Multivariable linear regression was used to examine the association between index year O/E mortality and O/E mortality in subsequent years while adjusting for PCI volume and patient severity.

Results  This study included 67 New York hospitals and 960 hospital-years. Hospitals with low PCI-related mortality (O/E mortality ratio, ≤1) and high mortality (O/E mortality ratio, >1) had inverse associations between their O/E mortality ratio in the index year and the subsequent change in the ratio (hospitals with low mortality, r = −0.45; hospitals with high mortality, r = −0.60). Little of the variation in risk-adjusted mortality was explained by prior performance. An increase in the O/E mortality ratio from 1.0 to 2.0 in the index year was associated with an O/E mortality ratio only 0.15 (95% CI, 0.02-0.27) higher in the following year.

Conclusions and Relevance  At hospitals with high or low PCI-related mortality rates, the rates largely regressed to the mean the following year. A hospital’s risk-adjusted mortality rate was poorly associated with its future mortality. Annual hospital PCI-related mortality may therefore not be a reliable indicator of hospital quality for guiding practice change or for helping patients select high-quality hospitals.

Introduction

Public reporting of percutaneous coronary intervention (PCI) outcomes began in New York in 1991 as part of an effort to empower patients to make informed decisions when selecting a hospital or physician and to incentivize hospitals and physicians to improve their quality of care.1 Since then, multiple states have followed suit.2 Pennsylvania began reporting PCI-related mortality outcomes for myocardial infarction in 2001, and Massachusetts began reporting in 2005.

In cardiovascular surgery, earlier studies3,4 found that public reporting was associated with quality improvement activities among hospitals and a reduction in mortality. However, studies5,6 on PCI public reporting found unintended adverse consequences that have muted enthusiasm for this practice. An analysis7 of patients receiving fee-for-service Medicare found that introduction of public reporting in Massachusetts was associated with a reduction of coronary angiography and PCI rates. A subsequent analysis5 using an all-payer mix found a significant association between public reporting and increased mortality among patients with acute myocardial infarction. This finding was primarily driven by higher mortality in patients who did not undergo PCI after institution of public reporting. In contrast, patients with an acute myocardial infarction who underwent PCI in public reporting states had lower mortality rates.8 Taken together, these findings suggest that public reporting may be associated with deferring PCI in high-risk patients who may benefit from the procedure.6

Despite these concerns, PCI-related mortality rates continue to be used as a quality measure to characterize hospital performance and identify outliers (ie, hospitals with excellent or poor performance). Patients may select hospitals with low PCI-related mortality rates in prior years, expecting lower mortality rates in the current year. However, the stability of PCI-related mortality as a hospital performance measure is unclear. In addition, there is no evidence, to our knowledge, that hospitals use these reports in a meaningful way to guide their quality programs. Therefore, to investigate whether PCI-related mortality is an appropriate quality measure, we evaluated year-to-year changes in performance and whether reported outcomes were associated with future mortality among New York hospitals.

Methods
Data

For this study, we used publicly available annual reports from the New York Percutaneous Intervention Reporting System from January 1, 1998, to December 31, 2016.9 These reports are released after a considerable delay (>2 years after the end of the calendar year being described). They contain observed and risk-adjusted mortality rates at the hospital level for all patients who undergo PCI in New York. These reports categorize outlier hospitals with reported mortality rates that are significantly higher or lower than the statewide rate. The Statewide Planning and Research Cooperative System audits medical records annually to ensure data accuracy. Institutional review board review was waived given lack of patient-level data.

Observed mortality was calculated as the number of deaths divided by the number of cases. Initial reports included only in-hospital mortality; since 2004, all mortality within 30 days of the index procedure was included. Our analysis compared hospital performance against other hospitals in the same year using the observed to expected (O/E) mortality ratio in a given year; thus, we included data before and after 2004.

Risk adjustment used a logistic regression model that evaluated the association between individual patient risk factors and mortality among all individuals who underwent PCI in a given year. In 2016, the model included the following patient characteristics: age, body mass index, presence of nonrefractory shock, left ventricular ejection fraction, myocardial infarction before PCI and the timing of the myocardial infarction, the presence of left main coronary artery disease, and comorbidities (cerebrovascular disease, chronic lung disease, congestive heart failure, diabetes with insulin therapy, malignant ventricular arrhythmia, peripheral vascular disease, and renal failure). The risk model predicting 30-day mortality after PCI had a C statistic of 0.876.
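The structure of this yearly risk model can be sketched as follows. The two features and the coefficient values here are purely illustrative stand-ins; the actual model is refit each year on the full New York PCI population with the covariate set listed above:

```python
import numpy as np

def predicted_mortality(x, beta, intercept):
    """Per-patient predicted 30-day mortality from a logistic risk model.

    x: (n_patients, n_features) array of risk factors.
    beta, intercept: hypothetical coefficients standing in for the
    published model, which is refit annually on all statewide PCI cases.
    """
    logit = intercept + x @ beta
    return 1.0 / (1.0 + np.exp(-logit))

def hospital_expected_mortality(x_hospital, beta, intercept):
    """A hospital's expected mortality rate: the mean predicted
    probability across its own PCI patients."""
    return predicted_mortality(x_hospital, beta, intercept).mean()
```

With illustrative coefficients, a higher-risk patient (older, in shock) receives a higher predicted probability, and a hospital's expected rate is simply the average over its case mix.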

Annual reports included hospital risk-adjusted mortality rates and 95% CIs. If the lower limit of the 95% CI exceeded the statewide mean, the hospital was classified as a high-mortality outlier (poor performance). If the upper limit of the 95% CI was below the statewide mean, the hospital was classified as a low-mortality outlier.
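The outlier rule above amounts to a simple interval check against the statewide mean; a minimal sketch (function and label names are ours):

```python
def classify_outlier(ci_lower, ci_upper, statewide_mean):
    """Outlier status as described in the New York reports: a hospital is
    a high-mortality outlier if the entire 95% CI for its risk-adjusted
    mortality lies above the statewide mean, and a low-mortality outlier
    if the entire CI lies below it."""
    if ci_lower > statewide_mean:
        return "high-mortality outlier"
    if ci_upper < statewide_mean:
        return "low-mortality outlier"
    return "non-outlier"
```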

Since 2006, New York has excluded PCI performed for patients with refractory shock (ie, patients with systolic blood pressure <80 mm Hg or cardiac index <2.0 L/min/m2 secondary to cardiac dysfunction before PCI despite pharmacologic or mechanical support). Starting in 2010, patients with anoxic brain injury from cardiac arrest before PCI who died after withdrawal of support were also excluded.1 Because we focused on a hospital’s relative performance compared with other hospitals in the same year (using the O/E mortality ratio), we included data from before and after these changes and varied this in a sensitivity analysis.

Outcomes
Observed Mortality Ratio and O/E Mortality Ratio

We normalized mortality statistics to compare hospital performance with the statewide mean in a given year. We divided a hospital’s observed mortality by the statewide mortality to determine the observed mortality ratio. A ratio above 1 indicated a higher crude mortality rate than the statewide mean in a given year.

Expected mortality was calculated from the coefficients of the risk model fit each year and a given patient’s characteristics. The annual report included hospitals’ expected mortality rates, calculated by averaging the expected mortality across a hospital’s patients undergoing PCI.

The risk-adjusted mortality rates, included in the public reports, represented a hospital’s expected mortality rate with a patient mix similar to the state mean. The O/E mortality ratios were not included in the public reports. We calculated O/E mortality ratios by dividing hospitals’ risk-adjusted mortality rate by statewide mortality. An O/E ratio above 1 indicated higher mortality than expected based on patient risk, whereas an O/E ratio below 1 indicated lower mortality than expected.

Patient Severity Ratio

We constructed a comparative metric, termed the patient severity ratio, for a hospital’s patient morbidity by dividing the expected mortality by statewide mortality. Larger patient severity ratios indicate increased prevalence of high-risk clinical features associated with increased mortality risk. A ratio above 1 signaled that a hospital had higher-risk patients than the statewide mean.
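The three normalized metrics defined above (observed mortality ratio, O/E mortality ratio, and patient severity ratio) can be sketched in a few lines, with all rates expressed as fractions:

```python
def normalized_ratios(observed, expected, risk_adjusted, statewide):
    """Normalize a hospital-year's mortality statistics to the statewide
    mortality rate for the same year. All inputs are rates (fractions)."""
    return {
        # crude performance: >1 means higher raw mortality than the state
        "observed_mortality_ratio": observed / statewide,
        # case mix: >1 means the hospital treated higher-risk patients
        "patient_severity_ratio": expected / statewide,
        # risk-adjusted performance: >1 means worse than expected
        "oe_mortality_ratio": risk_adjusted / statewide,
    }
```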

Statistical Analysis

We calculated PCI volume and mortality statistics for all hospital-years in the data set. We evaluated the change in PCI volume, patient severity, and normalized mortality statistics between consecutive years. Because of the nonnormal distributions, we present these statistics as medians and interquartile ranges (IQRs).

We divided hospital-years into quintiles based on their O/E mortality ratios compared with other hospitals in that year. We compared baseline characteristics, change in characteristics and performance during the subsequent year, and risk-adjusted mortality during the subsequent 2 years. Trends across quintiles were tested for significance using an extension of the Wilcoxon rank sum test.

We calculated Pearson correlation coefficients between index year O/E mortality ratio and PCI data for subsequent years. We also tested the association between change in O/E mortality ratio and change in patient severity ratio to evaluate the association of changes in coded patient severity after patient-level risk adjustment. A strong positive or negative association could suggest inadequate risk adjustment (eg, increased patient severity associated with increased risk-adjusted mortality) or gaming of the diagnosis coding (eg, increased patient severity associated with decreased risk-adjusted mortality).
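The regression-to-the-mean pattern these correlations probe can be reproduced with simulated data: if each hospital's true quality is stable but its annual estimate is noisy, the index-year value is automatically negatively correlated with the subsequent change. A sketch with illustrative parameters (not fitted to the New York data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_hospitals = 500
true_quality = rng.normal(1.0, 0.1, n_hospitals)   # stable hospital effect
noise_y1 = rng.normal(0.0, 0.4, n_hospitals)       # large annual noise
noise_y2 = rng.normal(0.0, 0.4, n_hospitals)

oe_year1 = true_quality + noise_y1
oe_year2 = true_quality + noise_y2
change = oe_year2 - oe_year1

# Index-year O/E correlates negatively with the subsequent change even
# though no hospital's underlying quality moved: hospitals that looked
# bad (or good) by chance revert toward the mean the next year.
r = np.corrcoef(oe_year1, change)[0, 1]
```

Under these assumed noise levels the simulated correlation is strongly negative, of the same sign as the associations reported in the Results.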

Our primary analysis was the adjusted association between a hospital’s index year O/E mortality ratio and the O/E mortality ratio in the subsequent year. Modeling a dependent variable (O/E ratio in year t) based on previous values of the variable (O/E ratio in year t − 1) introduces potential statistical bias because of serial correlation in the regression error.10 The Arellano-Bover/Blundell-Bond method (commonly used in the econometric literature) uses a distinct statistical approach (the generalized method of moments) that solves this problem and produces an unbiased estimate (in large samples) of the association between the O/E mortality ratio in year t − 1 and the ratio in year t.11 We implemented this model using the xtabond2 command in Stata, version 14.2 (StataCorp) and tested the assumptions underlying this method.12 The detailed methods and robustness tests are presented in the eMethods and eTable 1 in the Supplement. We used robust SEs to account for variation across hospitals in the model error.
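The bias that motivates this estimator can be demonstrated directly. In the simulation below (all parameters illustrative), the naive within (fixed-effects) estimator of a dynamic panel model recovers far less than the true autoregressive coefficient when the panel has few time periods per unit, which is the Nickell bias cited above:

```python
import numpy as np

rng = np.random.default_rng(1)
n, t, rho = 200, 5, 0.3           # short panel: few years per hospital
alpha = rng.normal(0, 1, n)       # hospital fixed effects

# Dynamic panel DGP: y_t = rho * y_{t-1} + fixed effect + noise
y = np.zeros((n, t))
y[:, 0] = alpha + rng.normal(0, 1, n)
for k in range(1, t):
    y[:, k] = rho * y[:, k - 1] + alpha + rng.normal(0, 1, n)

# Within estimator: demean each hospital's series, then regress y_t on
# y_{t-1}. Demeaning uses the whole series, so the lag is correlated
# with the transformed error, biasing rho_hat downward when t is small.
yd = y - y.mean(axis=1, keepdims=True)
x, z = yd[:, :-1].ravel(), yd[:, 1:].ravel()
rho_hat = (x @ z) / (x @ x)
```

The GMM-based Arellano-Bover/Blundell-Bond approach avoids this by instrumenting the lagged dependent variable, which is why it is used for the primary analysis.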

In a second model, we adjusted for PCI volume, change in patient severity, and prior-year (year t − 1) patient severity. In a third model, we added characteristics from 2 years earlier (year t − 2): O/E mortality ratio and patient severity. For each model, we determined the Pearson correlation between the predicted O/E mortality ratio and the actual O/E mortality ratio. We performed multiple sensitivity analyses for these models, excluding years with change in measure methods (2004, 2006, and 2010), limiting the modeled years to 2006 to 2016, and excluding hospitals in the lowest quintile of mean PCI volume. Additional details are given in eTable 1 and eTable 2 in the Supplement.

We compared hospitals classified as high- and low-mortality outliers based on risk-adjusted mortality with nonoutlier hospitals. We tested whether change in PCI volume or patient selection varied between hospitals with low and high mortality. We assessed the stability of outlier classification by assessing how frequently hospitals were designated as high- or low-mortality outliers during the study or within 3 years after outlier classification.

Analyses were performed using Stata, version 14.2. Statistical tests were deemed to be statistically significant at α = .05 (2-sided).

Results

There were 67 New York hospitals that accounted for 960 hospital-years with public PCI data from January 1, 1998, to December 31, 2016. Of the 899 hospital-years before 2016, a total of 893 hospital-years (99.3%) had mortality rates available for the following year. The 34 hospitals with public data in 1998 performed a mean of 980 PCIs (range, 137-2758) compared with a mean of 825 PCIs (range, 76-3479) across 61 hospitals in 2016.

Change in PCI Performance in the Year After an Index Measurement Period

We divided all hospital-years into quintiles based on O/E mortality ratios compared with other hospitals in a given year (Table 1). Median O/E mortality ratios ranged from 0.49 (IQR, 0.29-0.60) in quintile 1 to 1.75 (IQR, 1.57-2.05) in quintile 5. The median PCI volume was lower in hospitals in the lowest and highest mortality quintiles (Figure 1). Patient severity was minimally associated with O/E mortality (r = −0.05). Change in PCI volume (r = −0.01) or the patient severity ratio (r = 0.02) in the following year were also minimally associated with index year O/E mortality (Table 2).

The O/E mortality ratio during the index year was inversely associated with the change in O/E mortality ratio the following year (r = −0.65), suggesting regression to the mean (see eFigure 1 in the Supplement for hospital trends). This association persisted after stratifying hospital-years as lower (for O/E ratio ≤1, r = −0.45) or higher than expected mortality (for O/E ratio >1, r = −0.60) (Figure 2). We found a negative correlation between the change in patient severity ratio and the change in O/E mortality ratio (r = −0.14).

Association Between O/E Mortality Ratio and Future Performance

Differences in O/E mortality across index-year quintiles persisted but were substantially reduced in magnitude 1 year later (1.34 [IQR, 0.97-1.70] in the high-mortality quintile and 0.86 [IQR, 0.61-1.14] in the low-mortality quintile; P < .001). After adjustment for PCI volume and patient severity, an increase in the O/E mortality ratio from 1 to 2 was only associated with a 0.15 (95% CI, 0.02-0.27) increase in O/E mortality ratio in the subsequent year (Table 3). This finding suggests substantial regression to the mean; an increase from 0.92% (state mean) to 1.84% was associated with a decrease in risk-adjusted mortality of 0.79% the following year. There was a weak correlation between predicted and actual O/E mortality ratios (r = 0.32). The association remained after excluding years with changes in measure calculation methods, low-volume hospitals, or years before 2006 (Table 3). In these analyses, we did not find a significant association between change in patient severity and subsequent O/E mortality ratio. Additional model details are included in eTable 1 in the Supplement.

High- and Low-Mortality Outlier Hospital-Years Compared With Nonoutliers

During the study period, there were 35 hospital-years (3.6%) in which the 95% CI of the risk-adjusted mortality rate was greater than the statewide mean. The median O/E mortality was 2.24 (IQR, 1.91-2.57) among these hospital-years. There were 22 hospital-years (2.3%) with a 95% CI range lower than the state mean. Their median O/E mortality was 0.42 (IQR, 0.33-0.54). Among nonoutlier hospital-years, the median O/E mortality was 0.98 (IQR, 0.72-1.29).

The change in O/E mortality in the year after identification as an outlier also suggests regression to the mean. Among high-mortality outliers, the median change in O/E mortality was −0.96 (IQR, −1.45 to −0.52). In the 2 subsequent years after high-mortality outlier status, the median O/E mortality ratios were 1.36 (IQR, 1.02-1.61) and 1.29 (IQR, 0.86-1.55). Among low-mortality outliers, the median change in O/E mortality was 0.26 (IQR, 0.18-0.43). In the 2 years after low-mortality identification, the median O/E mortality ratios were 0.74 (IQR, 0.61-0.88) and 0.78 (IQR, 0.69-0.92).

Most high- and low-mortality outlier hospitals were only classified as such once (59.1% for high-mortality outlier hospitals and 61.5% for low-mortality outlier hospitals) during the entire follow-up period, but the 9 hospitals with more than 1 high-mortality outlier year were responsible for 22 years (mean of 4.2 years between high-mortality outlier classifications). The 5 hospitals with repeated low-mortality classifications were responsible for 14 outlier years (mean of 5.0 years between repeated classifications). In the 3-year period after an outlier year, a repeated outlier year was rare (eFigure 2 in the Supplement).

Discussion

We evaluated publicly reported PCI performance statistics among New York hospitals between 1998 and 2016 and found that a hospital’s risk-adjusted mortality rate was only weakly associated with its future mortality. At hospitals with high PCI-related mortality, PCI volume was not substantially reduced and patient severity in the following year did not change. However, these hospitals experienced substantial decreases in observed and risk-adjusted mortality. This finding likely represents regression to the mean given that low-mortality hospitals experienced analogous increases in mortality. In addition, given a delay of more than 2 years between the measurement period and the public report with risk-adjusted performance, it is unlikely that the reduction in mortality between consecutive years represents report-driven practice improvement. Our results suggest that the year-to-year variation in hospitals’ risk-adjusted mortality was driven largely by random variation and temporal trends consistent with regression to the mean.

Prior analyses5,7,13-15 found that introduction of public reporting of PCI mortality led to decreased coronary angiography and PCI rates. These patterns would be favorable if they reflected a reduction in inappropriate or futile cases, but lower rates of PCI have been most pronounced among high-risk patients who may benefit from intervention. In a recent survey14 of interventional cardiologists in New York and Massachusetts, 66% reported avoiding PCI at least twice partially because of concern regarding the effect on publicly reported outcomes, and 59% reported being pressured by their colleagues to avoid PCI because of the high risk of patient death. Such risk aversion can potentially contribute to worse overall clinical outcomes.6,16,17 However, in our study, we found no difference in the change in patient severity between low- and high-mortality hospitals. Given the delay between the performance period and the performance reports, there may be a delay in the changes to patient selection.

For public reporting to be useful, the signal to noise ratio must be adequate to distinguish high- and low-quality care. Mortality after PCI is a rare event, with a mean risk of 1% in New York, approximately 9 annual deaths per hospital. Thus, just a few unexpected deaths can lead to a 50% increase in mortality. Although risk adjustment accounts for most variation in PCI-related mortality at the patient level, our analysis suggests that little of the remaining variation is secondary to hospital quality of care. Much of the remaining variance is more likely associated with unmeasured patient-level characteristics or random chance. The procedural quality of the interventional cardiologist may have a negligible effect on absolute risk of 30-day all-cause mortality. A prior analysis18 of deaths after PCI found that, in 79% of cases, no complication was identified from the PCI procedure related to the patient’s death. An analysis19 of 30-day PCI-related mortality in the Veterans Affairs health care system found that only 8% of deaths were directly attributable to the PCI procedure. Surveyed interventional cardiologists have also doubted that PCI-related mortality is associated with physician quality.20 If the quality of the procedure is only weakly associated with the outcome, it will require more observations and stronger risk adjustment to ensure that the weak signal overcomes the noise.
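The scale of this noise is easy to quantify. Assuming a hospital performing roughly 900 PCIs per year at the ~1% statewide mortality rate (both figures from the text; the simulation itself is ours), chance alone regularly produces the 50% excess mortality described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pci = 900          # roughly the mean annual hospital PCI volume
true_rate = 0.01     # ~1% mean 30-day mortality in New York
expected_deaths = n_pci * true_rate   # about 9 deaths per hospital-year

# Simulate many hospital-years at identical true quality: how often does
# sampling variation alone yield at least 50% more deaths than expected?
deaths = rng.binomial(n_pci, true_rate, size=100_000)
p_50pct_excess = (deaths >= 1.5 * expected_deaths).mean()
```

Under these assumptions a sizable fraction of identical-quality hospital-years would show a 50% apparent excess, illustrating why single-year point estimates are unstable.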

Recognizing the uncertainty of mortality point estimates, New York public reporting includes 95% CIs and only characterizes hospitals as outliers when the 95% CI does not include the statewide mean. This approach is critical for low-volume hospitals that have substantial increases in their mortality rates from a single death. However, classification differs for similar rates, depending on hospital volume (Figure 2). This finding limits the ability to characterize quality among low-volume centers. Use of longer overlapping follow-up periods, such as the annual reports of 3-year performance currently used for New York interventional cardiologist reports, would increase precision by increasing case volume and reducing random year-to-year variation. In addition, removing the mortality rate altogether and solely identifying outliers may be preferable to reduce public focus on point estimates that are not strongly associated with a patient’s risk when the report is released.

Despite the conservative definitions, outlier hospitals also experienced marked regression to the mean in the following year. Most outlier hospitals only had such classification for a single year during the 18-year follow-up. However, most outlier years were secondary to hospitals with repeated classification. Identifying outliers for further internal review and potential quality improvement efforts may be valuable. However, it is unclear whether the current process of delayed public reporting of outlier status is useful given that differences in mortality between high-mortality outliers and nonoutliers persist but are smaller by the time of reporting. Our findings are overall consistent with a prior evaluation21 of the reliability of a PCI-related mortality measure using the National Cardiovascular Data Registry. That analysis found that most physicians classified as high-mortality outliers rarely again had high mortality during a 5-year period.

Furthermore, public reports are released more than 2 years after the performance period. Although small differences persisted between low- and high-mortality hospitals in the following year, the association between the performance period and follow-up performance weakened over time. Given the imprecision of the estimates and change over time, risk-adjusted mortality estimates from more than 2 years ago may have minimal utility in patient decision-making. This lag may also limit the actionability of these scores for physicians. Collecting these data may still be valuable for public health monitoring and for identifying potential concerns that require closer review for safety. However, our results question the utility and validity of publicly reporting these rates. In addition, the potential benefits must be weighed against the substantial financial costs of PCI reporting.22

Although efforts to improve public transparency should continue, they should focus on disseminating actionable measures of hospital and physician quality. These could include clinical outcomes that the interventionalist can affect, such as access-site complications; process measures with strong associations with clinical outcomes and patient experience, such as radial artery access; or patient-centered outcomes for a disease state, such as the Seattle Angina Questionnaire among patients with stable coronary artery disease.

Limitations

Important limitations to our analysis warrant discussion. We defined patient severity based on expected mortality calculated from characteristics included in the risk adjustment formula. This approach does not capture unmeasured characteristics that may influence patient severity. In addition, we were unable to differentiate between actual changes in severity and changes in coding. In New York, however, uniform registry data with random audits are used to ensure appropriate coding.

Conclusions

After evaluating the association between publicly reported New York hospital PCI-related mortality rates and PCI-related mortality in subsequent years, we found that reported rates were weakly associated with future hospital mortality. This analysis supports concerns that current all-cause 30-day PCI mortality measures reflect random year-to-year variation rather than quality of care alone. Public reporting measures should be carefully evaluated to ensure that they identify outliers based on a true signal of quality rather than random year-to-year noise.

Article Information

Accepted for Publication: July 5, 2019.

Corresponding Author: Alexander T. Sandhu, MD, MS, Division of Cardiovascular Medicine, Department of Medicine, Stanford University, 300 Pasteur Dr, Stanford, CA 94305 (ats114@stanford.edu).

Published Online: September 18, 2019. doi:10.1001/jamacardio.2019.3221

Author Contributions: Dr Sandhu had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Sandhu, Bhattacharya, Heidenreich.

Acquisition, analysis, or interpretation of data: Sandhu, Kohsaka, Bhattacharya, Fearon, Harrington.

Drafting of the manuscript: Sandhu.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Sandhu, Bhattacharya.

Administrative, technical, or material support: Sandhu, Kohsaka, Harrington.

Supervision: Bhattacharya, Harrington, Heidenreich.

Conflict of Interest Disclosures: Dr Kohsaka reported receiving grants and personal fees from Bayer, receiving grants from Daiichi Sankyo, and receiving personal fees from Bristol-Myers Squibb, Pfizer, and AstraZeneca outside the submitted work. Dr Bhattacharya reported receiving grants from the National Institute on Aging during the conduct of the study. Dr Fearon reported receiving grants from Abbott Vascular, Medtronic, and CathWorks during the conduct of the study and other support from HeartFlow outside the submitted work. Dr Harrington reported serving on the board of directors (unpaid) for the American Heart Association and Stanford HealthCare. No other disclosures were reported.

References
1. Hannan EL, Cozzens K, King SB III, Walford G, Shah NR. The New York State cardiac registries: history, contributions, limitations, and lessons for future efforts to assess and publicly report healthcare outcomes. J Am Coll Cardiol. 2012;59(25):2309-2316. doi:10.1016/j.jacc.2011.12.051
2. Wadhera RK, Joynt Maddox KE, Yeh RW, Bhatt DL. Public reporting of percutaneous coronary intervention outcomes: moving beyond the status quo. JAMA Cardiol. 2018;3(7):635-640. doi:10.1001/jamacardio.2018.0947
3. Hannan EL, Kilburn H Jr, Racz M, Shields E, Chassin MR. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA. 1994;271(10):761-766. doi:10.1001/jama.1994.03510340051033
4. Peterson ED, DeLong ER, Jollis JG, Muhlbaier LH, Mark DB. The effects of New York’s bypass surgery provider profiling on access to care and patient outcomes in the elderly. J Am Coll Cardiol. 1998;32(4):993-999. doi:10.1016/S0735-1097(98)00332-5
5. Waldo SW, McCabe JM, O’Brien C, Kennedy KF, Joynt KE, Yeh RW. Association between public reporting of outcomes with procedural management and mortality for patients with acute myocardial infarction. J Am Coll Cardiol. 2015;65(11):1119-1126. doi:10.1016/j.jacc.2015.01.008
6. Wadhera RK, Bhatt DL. Taking the “public” out of public reporting of percutaneous coronary intervention. JAMA. 2017;318(15):1439-1440. doi:10.1001/jama.2017.12087
7. Joynt KE, Blumenthal DM, Orav EJ, Resnic FS, Jha AK. Association of public reporting for percutaneous coronary intervention with utilization and outcomes among Medicare beneficiaries with acute myocardial infarction. JAMA. 2012;308(14):1460-1468. doi:10.1001/jama.2012.12922
8. Cavender MA, Joynt KE, Parzynski CS, et al. State mandated public reporting and outcomes of percutaneous coronary intervention in the United States. Am J Cardiol. 2015;115(11):1494-1501. doi:10.1016/j.amjcard.2015.02.050
9. New York Department of Health. Cardiovascular Disease Data and Statistics. https://www.health.ny.gov/statistics/diseases/cardiovascular/. Accessed May 10, 2019.
10. Nickell S. Biases in dynamic models with fixed effects. Econometrica. 1981;49(6):1417-1426. doi:10.2307/1911408
11. Arellano M, Bover O. Another look at the instrumental variable estimation of error-components models. J Econom. 1995;68(1):29-51. doi:10.1016/0304-4076(94)01642-D
12. Roodman D. How to do xtabond2: an introduction to difference and system GMM in Stata. Stata J. 2009;9(1):86-136. doi:10.1177/1536867X0900900106
13. Moscucci M, Eagle KA, Share D, et al. Public reporting and case selection for percutaneous coronary interventions: an analysis from two large multicenter percutaneous coronary intervention databases. J Am Coll Cardiol. 2005;45(11):1759-1765. doi:10.1016/j.jacc.2005.01.055
14. Blumenthal DM, Valsdottir LR, Zhao Y, et al. A survey of interventional cardiologists’ attitudes and beliefs about public reporting of percutaneous coronary intervention. JAMA Cardiol. 2018;3(7):629-634. doi:10.1001/jamacardio.2018.1095
15. Apolito RA, Greenberg MA, Menegus MA, et al. Impact of the New York State Cardiac Surgery and Percutaneous Coronary Intervention Reporting System on the management of patients with acute myocardial infarction complicated by cardiogenic shock. Am Heart J. 2008;155(2):267-273. doi:10.1016/j.ahj.2007.10.013
16. Feldman DN, Yeh RW. Public reporting of percutaneous coronary intervention mortality in New York State: are we helping our patients? Circ Cardiovasc Qual Outcomes. 2017;10(9):e004027. doi:10.1161/CIRCOUTCOMES.117.004027
17. Gupta A, Yeh RW, Tamis-Holland JE, et al. Implications of public reporting of risk-adjusted mortality following percutaneous coronary intervention: misperceptions and potential consequences for high-risk patients including nonsurgical patients. JACC Cardiovasc Interv. 2016;9(20):2077-2085. doi:10.1016/j.jcin.2016.08.012
18. Resnic FS, Welt FG. The public health hazards of risk avoidance associated with public reporting of risk-adjusted outcomes in coronary intervention. J Am Coll Cardiol. 2009;53(10):825-830. doi:10.1016/j.jacc.2008.11.034
19. Bricker RS, Valle JA, Plomondon ME, Armstrong EJ, Waldo SW. Causes of mortality after percutaneous coronary intervention. Circ Cardiovasc Qual Outcomes. 2019;12(5):e005355. doi:10.1161/CIRCOUTCOMES.118.005355
20. Fernandez G, Narins CR, Bruckel J, Ayers B, Ling FS. Patient and physician perspectives on public reporting of mortality ratings for percutaneous coronary intervention in New York State. Circ Cardiovasc Qual Outcomes. 2017;10(9):e003511. doi:10.1161/CIRCOUTCOMES.116.003511
21. Doll JA, Dai D, Roe MT, et al. Assessment of operator variability in risk-standardized mortality following percutaneous coronary intervention: a report from the NCDR. JACC Cardiovasc Interv. 2017;10(7):672-682. doi:10.1016/j.jcin.2016.12.019
22. Wadhera RK, O’Brien CW, Joynt Maddox KE, et al. Public reporting of percutaneous coronary intervention outcomes: institutional costs and physician burden. J Am Coll Cardiol. 2019;73(20):2604-2608. doi:10.1016/j.jacc.2019.03.014