Columbo and colleagues1 investigate a novel instrumental variable method designed for risk-adjusted analysis of time-dependent outcomes. The topic is of interest to clinicians and health care researchers, who often find it challenging to interpret contradictory findings from randomized clinical trials (RCTs) and real-world observational studies. Using data from 86 017 patients who underwent carotid revascularization in the Vascular Quality Initiative, Columbo et al1 used instrumental variable analysis to more accurately determine the relative long-term mortality after carotid endarterectomy (CEA) vs carotid artery stenting (CAS). The proposed instrument, defined as the proportion of CEA among the total carotid procedures (CEA and CAS) performed at each hospital in the 12 months prior to the index operation, yielded a hazard ratio for long-term mortality of 0.83 (95% CI, 0.70-0.98) for CEA vs CAS, compared with 0.69 (95% CI, 0.65-0.74) using traditional Cox regression analysis and 0.71 (95% CI, 0.65-0.77) using propensity-matched analyses.1 The authors were able to show a survival benefit with CEA relative to CAS using unadjusted, adjusted, and propensity-matched models,1 which parallels the findings of other observational studies. However, the proposed instrumental variable provided less biased and more conservative survival estimates, which aligned more closely with results from RCTs.1 Interestingly, there was no difference in survival between CEA and CAS in asymptomatic patients (hazard ratio, 0.90; 95% CI, 0.70-1.14) using the proposed method, whereas crude, adjusted, and propensity-matched analyses showed lower long-term mortality in this group of patients.
The evidence produced from studies that compare different treatment modalities for vascular disease is of utmost importance to health care professionals and other decision makers. For example, the performance of CAS increased up until 2006 and then declined from 2007 to 2014, after the initial results of the Carotid Revascularization Endarterectomy vs Stenting Trial showed higher perioperative stroke and death rates after CAS compared with CEA. Nonetheless, there is still an open debate on the effectiveness of CAS because of the conflicting results of trials comparing the 2 procedures and the discrepancy between what is reported in RCTs and what happens in the real world.1,2 Although long-term outcomes from the Carotid Revascularization Endarterectomy vs Stenting Trial have shown no difference in the composite outcome of stroke, death, and myocardial infarction between the 2 treatment modalities,3 the applicability of these findings to individual centers and surgeons still required validation.4 This is relevant since a high percentage of patients who would have been excluded from RCTs undergo CAS in daily clinical practice.2
In a review of patients with high-grade carotid artery stenosis who underwent CEA or CAS by a single vascular surgeon at our institution, the composite of major adverse events was equivalent for the 2 treatment modalities, supporting the findings of the Carotid Revascularization Endarterectomy vs Stenting Trial.4 Similarly, several all-comer registries in real-world settings have shown comparable results for CAS and CEA,2 attributable to improvements in patient selection, operator skills, and technological advancements in CAS. On the other hand, a meta-analysis of 8 RCTs (n = 7091) comparing the long-term outcomes of CAS vs CEA, with follow-up ranging from 2 to 10 years, as well as several other observational studies, have shown a higher observed risk of stroke and death throughout follow-up with CAS, suggesting that CEA remains the treatment of choice for carotid stenosis.5
Clinical decision making is based on the appraisal of causality.6 Randomized clinical trials are likely to remain the gold standard for clinical decision making, as they provide the most internally valid evidence and can control for bias attributable to unmeasured differences between patients. However, the generalizability of RCTs is often compromised owing to strict selection criteria for patients and for the operators performing the procedures, which often exclude large subgroups encountered in general practice, such as patients with multiple comorbidities.6 Moreover, RCTs are not always feasible, practical, or ethical to conduct. In these situations, other designs must be used, such as observational studies based on health care databases.2,7 Observational studies and systematic reviews with meta-analysis are expected to provide more generalizable, real-world evidence and are increasingly being used to estimate the effects of treatments, exposures, and interventions on outcomes. Nonetheless, they are often limited by inherent residual bias from measurement errors and unmeasured confounding variables that cannot be adjusted for in this type of study. Since outcomes in observational studies cannot be compared directly between treatment groups, various statistical methods must be used to minimize the effects of confounding and obtain an unbiased estimate of treatment effects.8
Time-to-event outcomes, assessed with survival analysis, are frequent in most of these studies.8,9 Most studies use Cox proportional hazards modeling to compare outcomes between treatment groups while adjusting for multiple confounders.9 However, Cox proportional hazards models should be interpreted with caution since they rely on 2 important assumptions: first, that censoring is independent of the hazard, and second, that the hazard functions, representing the risk of an event over time, are proportional to each other across all patient groups.9 Another popular tool for adjusting for confounders, often regarded as less biased, more robust, and more precise, is propensity-matched analysis. This method forms matched sets of treated and untreated patients who have a similar probability of receiving the treatment based on their distribution of measured baseline covariates. However, propensity score methods are often applied incorrectly when estimating the effect of treatment on time-to-event outcomes. Common errors include the use of inappropriate statistical tests and the failure to assess whether the specification of the propensity score model achieved acceptable balance in baseline covariates between treated and untreated patients.8 Moreover, their performance in larger data sets with at least 8 to 10 events per variable is similar to or even worse than that provided by logistic regression or Cox proportional hazards analyses.8 Thus, clinical researchers should be cautious when using these currently available adjustment methods, and readers should interpret their results with caution, as they can only address observed confounders while overlooking unobserved ones.
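This limitation can be illustrated with a brief simulation. The following sketch (in Python, using entirely hypothetical effect sizes and variable names, not data from the study) matches patients on a measured confounder, standing in for matching on an estimated propensity score; the matched estimate sheds the bias from the measured confounder but remains biased by the unmeasured one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated cohort: X is a measured confounder, U an unmeasured one.
X = rng.normal(size=n)
U = rng.normal(size=n)
p_treat = 1.0 / (1.0 + np.exp(-(X + U)))      # treatment depends on both
T = rng.random(n) < p_treat
true_effect = 1.0
Y = true_effect * T + 2.0 * X + 2.0 * U + rng.normal(scale=0.5, size=n)

# Naive comparison of group means: confounded by both X and U.
naive = Y[T].mean() - Y[~T].mean()

# 1:1 nearest-neighbor matching on the measured confounder X
# (a stand-in for matching on an estimated propensity score).
ctrl_order = np.argsort(X[~T])
Xc, Yc = X[~T][ctrl_order], Y[~T][ctrl_order]
idx = np.clip(np.searchsorted(Xc, X[T]), 1, len(Xc) - 1)
nearest = np.where(np.abs(Xc[idx] - X[T]) < np.abs(Xc[idx - 1] - X[T]),
                   idx, idx - 1)
matched = (Y[T] - Yc[nearest]).mean()

print(f"true effect:      {true_effect:.2f}")
print(f"naive estimate:   {naive:.2f}")   # biased by X and U
print(f"matched estimate: {matched:.2f}") # X bias removed, U bias remains
```

The matched estimate lands closer to the true effect than the naive comparison, yet it does not recover it: the contribution of the unmeasured confounder U survives matching, which is precisely the residual bias that motivates instrumental variable approaches.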
A relatively new technique for adjusting for unmeasured as well as measured risk factors, presented by Columbo et al,1 is instrumental variable analysis. An instrumental variable is a variable that can mimic the treatment assignment process in a randomized study. A valid instrument induces changes in the explanatory variable but has no independent effect on the dependent variable, allowing a researcher to theoretically infer the causal effect of the explanatory variable on the dependent variable.10 This method is well established in economics and has become increasingly common and accepted in medical research.3 While the extent to which propensity-matched methods can control for hidden biases relies on the correlation between unmeasured prognostic variables and the measured covariates used to compute the score,10 instrumental variables have the advantage of being able to adjust for many confounders, including unobserved ones.
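The logic above can be sketched in a few lines. In this illustrative simulation (hypothetical coefficients, not the authors' model), a naive regression of outcome on treatment is biased by an unmeasured confounder, whereas the single-instrument (Wald) estimator, equivalent to two-stage least squares here, recovers the true effect because the instrument moves treatment but touches the outcome only through treatment:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

U = rng.normal(size=n)                  # unmeasured confounder
Z = rng.normal(size=n)                  # instrument: affects T, not Y directly
T = 0.8 * Z + U + rng.normal(size=n)    # "treatment" (first stage)
true_effect = 1.0
Y = true_effect * T + 2.0 * U + rng.normal(size=n)

# Naive regression slope of Y on T: biased upward by U.
c = np.cov(T, Y)
naive = c[0, 1] / c[0, 0]

# Instrumental variable (Wald) estimator: cov(Z, Y) / cov(Z, T).
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, T)[0, 1]

print(f"true effect: {true_effect:.2f}")
print(f"naive slope: {naive:.2f}")      # contaminated by U
print(f"IV estimate: {iv:.2f}")         # close to the true effect
```

In the study by Columbo et al,1 the hospital-level proportion of CEA plays the role of Z: it shifts which procedure a patient receives without plausibly affecting long-term mortality through any other pathway.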
Several approaches have been proposed for handling instrumental variables for time-to-event data with right censoring.7,10 The instrumental variable method proposed by Columbo et al1 likely provides findings consistent with those of RCTs and is straightforward and easy to implement. However, it needs to be applied in other cohorts and health care data sets, as well as in comparisons of other treatment modalities, to validate whether it can consistently provide more accurate and conservative estimates of time-dependent outcomes than those obtained with traditional adjustment methods. Instrumental variables are most useful in studies with only moderate to small, rather than strong, confounding effects. In addition to requiring a sufficient sample size, an instrumental variable should satisfy 2 main criteria to provide a reasonable estimate of the treatment effect: (1) it should cause variation in the treatment variable, and (2) it should not have a direct effect on the outcome variable, affecting it only indirectly through the treatment variable. Validation of this method is thus crucial before we can safely use it to draw conclusions from observational studies.
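Of the 2 criteria above, only the first is empirically testable: instrument relevance is conventionally screened with the first-stage F statistic (a common rule of thumb is F > 10), whereas the exclusion restriction must be defended on subject-matter grounds. A minimal sketch of the relevance check, again on simulated data with hypothetical effect sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000

Z = rng.normal(size=n)                        # candidate instrument
T = 0.3 * Z + rng.normal(size=n)              # first stage: Z must move T

r = np.corrcoef(Z, T)[0, 1]
F = (n - 2) * r**2 / (1 - r**2)               # first-stage F, one instrument
print(f"first-stage r = {r:.3f}, F = {F:.1f}")  # rule of thumb: F > 10
```

A weak instrument (small F) yields unstable, bias-prone estimates even in large samples; no analogous statistic exists for criterion 2, which is why external validation across cohorts, as called for above, remains essential.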
As the authors mentioned in their study, when results of observational studies are concordant with randomized clinical trials, clear messages emerge for patients, clinicians, and payers to guide treatment decisions.1 An instrumental variable analysis cannot be used as a substitute for RCTs to make causal inference. However, an instrumental variable method that mitigates the selection bias and provides sufficient and appropriate adjustment for unmeasured confounding is crucial when comparing time-dependent outcomes between competing treatments for carotid revascularization and can therefore increase the utility of real-world evidence in guiding clinical decision making.
Published: September 7, 2018. doi:10.1001/jamanetworkopen.2018.1831
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2018 Dakour-Aridi H et al. JAMA Network Open.
Corresponding Author: Mahmoud B. Malas, MD, MHS, Vascular and Endovascular Research Center, Johns Hopkins Bayview Medical Center, 4940 Eastern Ave, Bldg A/5, Baltimore, MD 21224 (email@example.com).
Conflict of Interest Disclosures: None reported.
Dakour-Aridi H, Malas MB. Less Biased Estimation of the Survival Benefit of Carotid Endarterectomy Using Real-World Data: Bridging the Gap Between Observational Studies and Randomized Clinical Trials. JAMA Netw Open. 2018;1(5):e181831. doi:10.1001/jamanetworkopen.2018.1831