Association of Hospital Interoperable Data Sharing With Alternative Payment Model Participation

Key Points

Question: How much progress did US hospitals make toward building a nationally interoperable health system from 2014 to 2018, and was aligning financial incentives via alternative payment models associated with interoperability?

Findings: In this cohort study of 3928 US hospitals, progress in interoperability was slow, with fewer than half of hospitals reporting that they engaged in all 4 domains of interoperability by 2018. No evidence of an association between alternative payment model participation and interoperable data sharing was found.

Meaning: The results of this study suggest that building a nationally interoperable health care system remains a complex and challenging task that requires more than just alignment of financial incentives via voluntary payment reform programs.


Integrating:
We identified hospitals that integrated information into the EHR without manual intervention using the question "Does your EHR integrate the information contained in summary of care records received electronically (not eFax) without the need for manual entry?" Hospitals that responded "Yes, routinely" or "Yes, but not routinely" were considered as integrating data.
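As an illustrative sketch, the coding rule above can be expressed as follows; the response strings mirror the survey item, but the column and variable names are illustrative, not the actual AHA IT Supplement field names.

```python
import pandas as pd

def code_integrating(responses: pd.Series) -> pd.Series:
    """1 if the hospital integrates electronic summary-of-care records
    without manual entry (routinely or not routinely), 0 otherwise."""
    yes_values = {"Yes, routinely", "Yes, but not routinely"}
    return responses.isin(yes_values).astype(int)

# Toy example rows; "integrate_response" is an assumed column name.
survey = pd.DataFrame({
    "integrate_response": ["Yes, routinely", "No", "Yes, but not routinely"]
})
survey["integrating"] = code_integrating(survey["integrate_response"])
```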

Full AHA Survey Questions for Alternative Payment Model Participation
We created dichotomous measures of participation in each of the three APMs. For accountable care organizations, we used the question "Has your hospital or health care system established an accountable care organization (ACO)?" for 2014-2017; hospitals that replied "Yes" were considered to participate in an ACO for that year. For 2018, the same question was used with revised response options; hospitals that replied "My hospital currently leads an ACO" or "My hospital currently participates in an ACO (but is not its leader)" were considered to participate in an ACO that year.
For patient-centered medical homes, we used the question "Does your hospital have an established medical home program?"; hospitals that replied "Yes" were considered to participate in a PCMH. Finally, for bundled payments, we used the question "Does your hospital participate in a bundled payment program involving inpatient, physician, and/or post-acute care services where the hospital receives a single payment from a payer for a package of services and then distributes payments to participating care delivery organizations (such as a single fee for hospital and physician services for a specific procedure, e.g. hip replacement, CABG)?"; hospitals that responded "Yes" were considered to participate in a bundled payment program, for 2015-2018 (bundled payment data were not available for 2014). Hospitals that participated in one or more APMs in a year were considered an APM participant.
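The any-APM indicator described above can be sketched as a row-wise maximum over the three dichotomous measures; the column names are assumptions for illustration.

```python
import pandas as pd

def any_apm(df: pd.DataFrame) -> pd.Series:
    """1 if the hospital participated in at least one APM that year."""
    return df[["aco", "pcmh", "bundled"]].max(axis=1).astype(int)

# Toy hospital-year rows with assumed column names.
panel = pd.DataFrame({
    "aco":     [1, 0, 0],
    "pcmh":    [0, 0, 1],
    "bundled": [0, 0, 0],
})
panel["apm"] = any_apm(panel)
```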

eMethods 3. Technical Appendix and Robustness Tests for Two-Way Fixed Effects Design
Our primary specification is a two-way fixed effects model in which the dependent variable is a binary indicator of whether a hospital reported engagement in all 4 domains of interoperability in a given year, and the independent variable of interest is whether the hospital participated in any alternative payment model in that year. We use hospital fixed effects to control for time-invariant unobserved confounding and year fixed effects to control for the secular increase in interoperability over time. We also include a set of time-varying controls. Our primary analytic dataset is an unbalanced panel of hospitals from 2014-2018, which includes 3,914 unique hospitals and 13,864 hospital-year observations. All models include robust standard errors clustered at the hospital level.
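The mechanics of the two-way fixed effects estimator can be sketched via double demeaning, which is exact for a balanced panel. This is a stripped-down numeric illustration only: the paper's models add time-varying controls and hospital-clustered standard errors via Stata's reghdfe, and all variable names below are assumptions.

```python
import numpy as np
import pandas as pd

def twfe_coef(df, y="interop4", d="apm", unit="hospital", time="year"):
    """Two-way fixed effects coefficient on d via double demeaning:
    subtract unit means and time means, add back the grand mean."""
    df = df.copy()
    for col in (y, d):
        df[col + "_dm"] = (df[col]
                           - df.groupby(unit)[col].transform("mean")
                           - df.groupby(time)[col].transform("mean")
                           + df[col].mean())
    x = df[d + "_dm"].to_numpy()
    return float(x @ df[y + "_dm"].to_numpy() / (x @ x))

# Toy balanced panel built with a known treatment effect of 0.5 plus
# hospital and year effects (outcome left continuous for illustration).
panel = pd.DataFrame({
    "hospital": ["A", "A", "B", "B"],
    "year":     [2014, 2015, 2014, 2015],
    "apm":      [0, 0, 0, 1],
    "interop4": [0.0, 0.2, 1.0, 1.7],
})
beta = twfe_coef(panel)  # recovers the built-in effect of 0.5
```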
There are two critical assumptions necessary for two-way fixed effects to produce an unbiased average treatment effect estimate. These are the constant treatment effect assumption and the no unobserved time-varying confounders assumption. In this technical appendix, we discuss in detail our choice to use two-way fixed effects and perform several robustness tests on our main specification. We then show empirical tests of these two assumptions, and employ a new estimator that relaxes the constant treatment effect assumption.

Interoperability by Always, Sometimes, and Never APM Participants
First, we compared Exhibit 3, which shows APM vs non-APM hospitals in repeated cross-sections over the years, with a setup that compares hospitals that were always APM participants during our study period, those that were sometimes participants, and those that never participated.

Variation in the Treatment Variable
We first verify that there are time-varying changes in the treatment variable; without them, a fixed effects estimator has no variation from which to identify an effect. There is substantial churn in and out of each alternative payment model, as well as in our binary measure of hospital participation in any APM in a given year.
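The within-hospital variation the estimator relies on can be quantified by counting treatment switches, as in this sketch (column names are assumptions):

```python
import pandas as pd

def count_switches(df, unit="hospital", time="year", d="apm"):
    """Number of within-hospital changes (0->1 or 1->0) in the APM
    indicator across consecutive observed years."""
    df = df.sort_values([unit, time])
    changed = df.groupby(unit)[d].diff().abs()
    return int(changed.fillna(0).sum())

# Toy panel: hospital A joins an APM in 2015; hospital B never joins.
panel = pd.DataFrame({
    "hospital": ["A"] * 3 + ["B"] * 3,
    "year": [2014, 2015, 2016] * 2,
    "apm": [0, 1, 1, 0, 0, 0],
})
n_switches = count_switches(panel)
```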

Hausman Test: Do We Need Fixed Effects?
Our first diagnostic test determines whether we need hospital-level fixed effects, or whether a random effects model would produce an unbiased estimate. To do this, we use a Hausman test to determine whether the coefficients from our consistent estimator (fixed effects) differ systematically from those of our efficient estimator (random effects).
The Hausman test rejects the null hypothesis of no systematic difference in coefficients, indicating that a random effects model would be biased. We use Stata's reghdfe command, which iteratively removes singleton groups so that they do not bias standard error calculations. In this specification we find a null effect on the APM participation dummy, with a tight 95% confidence interval. Using the same setup, but with the binary dummy for participation in any of the 3 APMs disaggregated into the 3 individual APMs, we once again find a null effect for each, with small confidence intervals. These results are similar to our primary specification, suggesting no statistically significant effect of APM participation on the number of interoperability domains in which a hospital engages.
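For a single coefficient, the logic of the Hausman statistic reduces to the sketch below; the numbers are illustrative, not the paper's estimates, and in practice the test is run with Stata's hausman command over the full coefficient vector.

```python
# Hausman statistic for one coefficient: the squared difference between
# the consistent (FE) and efficient (RE) estimates, scaled by the
# difference in their estimated variances. Under the null it is
# chi-squared with 1 degree of freedom.
def hausman_stat(b_fe, b_re, var_fe, var_re):
    return (b_fe - b_re) ** 2 / (var_fe - var_re)

# Illustrative inputs (not from the study).
H = hausman_stat(b_fe=0.010, b_re=0.045, var_fe=0.0004, var_re=0.0001)
# Compare to the chi-squared(1) 95% critical value, 3.84.
reject_random_effects = H > 3.84
```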

Alternative Specification: Expressing the Dependent Variable as 3 Domains (Without Integration)
In this model, rather than the dichotomous measure of all 4 domains of interoperability, we create a binary measure of hospital engagement in 3 domains (finding, sending, and receiving data), treating integration as a conceptually distinct capability despite its importance to data exchange. We once again use a linear model with two-way fixed effects to estimate the association between APM participation and interoperability engagement. We find qualitatively similar results, suggesting that measuring interoperability without the integration component does not change the association with alternative payment model participation.
While the fixed effects design is often called a within estimator, we wanted to ensure that our comparison group was appropriate. We ran robustness tests in which we dropped all hospitals that were always treated, that is, hospitals that were in an APM for the entire sample period and can be considered "left censored" with respect to the treatment variable. We found results similar to our main specification.
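The exclusion of left-censored hospitals can be sketched as a filter on the panel (column names are assumptions):

```python
import pandas as pd

def drop_always_treated(df, unit="hospital", d="apm"):
    """Drop hospitals whose APM indicator is 1 in every observed year."""
    always = df.groupby(unit)[d].transform("min") == 1
    return df[~always]

# Toy panel: hospital A is always treated; hospital B joins in 2015.
panel = pd.DataFrame({
    "hospital": ["A", "A", "B", "B"],
    "year": [2014, 2015] * 2,
    "apm": [1, 1, 0, 1],
})
trimmed = drop_always_treated(panel)
```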

Alternative Specification: Dose-Response Effect
We wanted to evaluate whether there was a dose-response effect; that is, are hospitals participating in more APMs more likely to become interoperable upon joining an additional APM? Once again, we find a null effect, with small confidence intervals, of joining an additional APM.
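The dose-response treatment variable is simply the count of APMs (0-3) a hospital participates in that year, sketched here with assumed column names:

```python
import pandas as pd

def apm_count(df: pd.DataFrame) -> pd.Series:
    """Number of APMs (0-3) in which the hospital participates."""
    return df[["aco", "pcmh", "bundled"]].sum(axis=1)

# Toy rows: the first hospital-year is in two APMs, the second in none.
panel = pd.DataFrame({"aco": [1, 0], "pcmh": [1, 0], "bundled": [0, 0]})
panel["n_apms"] = apm_count(panel)
```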

Alternative Specification: Removing APM "Leavers"
There is churn in and out of alternative payment models over the course of the sample period. To ensure that our results are robust, we estimated the effect of joining an APM on interoperability after excluding the subset of hospitals that subsequently left an APM. In this analysis, we discarded any hospital that participated in any APM and then left at any point in the study period.
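The "leaver" exclusion can be sketched as flagging any 1-to-0 transition in the APM indicator and dropping those hospitals entirely (column names are assumptions):

```python
import pandas as pd

def drop_leavers(df, unit="hospital", time="year", d="apm"):
    """Drop hospitals with any 1->0 transition in the APM indicator."""
    df = df.sort_values([unit, time])
    left = df.groupby(unit)[d].diff() == -1
    leavers = df.loc[left, unit].unique()
    return df[~df[unit].isin(leavers)]

# Toy panel: hospital A leaves its APM after 2014; hospital B joins in 2015.
panel = pd.DataFrame({
    "hospital": ["A", "A", "B", "B"],
    "year": [2014, 2015] * 2,
    "apm": [1, 0, 0, 1],
})
kept = drop_leavers(panel)
```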

Alternative Specification: Removing Interoperability "De-Adopters"
Most hospitals, once they begin to engage in interoperability, remain interoperable for the remainder of our study period. Some hospitals, however, may not stay interoperable, whether because of differing interpretations of the survey questions that make up our interoperability measures or because of actual reduction or de-adoption of data exchange. In this specification we excluded any hospital that reported engaging in all 4 domains of interoperability and then reported that it did not in a subsequent year.

Diagnostic Test: Heterogeneous Treatment Effects
Recent empirical research on two-way fixed effects (hereafter TWFE) has highlighted a possible shortcoming when treatment effects are heterogeneous. These estimators identify weighted sums of the average treatment effect (ATE) in each group, with weights that may be negative; negative weights can produce a negative estimand even when all ATEs are positive. We use the diagnostic tests outlined in de Chaisemartin and D'Haultfoeuille (2020) to determine whether our model is susceptible to this bias. If the proportion of negative weights is high, a weighted fixed effects estimator is necessary.
We find that, under two different sets of assumptions, only a very small proportion of the weights are negative. The results of this diagnostic test indicate that our estimate is unlikely to be biased by heterogeneous treatment effects.
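A simplified version of this diagnostic can be sketched as follows: under TWFE, each treated cell's weight is proportional to the residual from regressing the treatment on the two sets of fixed effects, so treated cells with negative residuals receive negative weights. This is an illustration only, with assumed column names; the full procedure is implemented in de Chaisemartin and D'Haultfoeuille's twowayfeweights Stata command.

```python
import pandas as pd

def negative_weight_share(df, unit="hospital", time="year", d="apm"):
    """Share of treated hospital-years with a negative TWFE weight,
    proxied by the sign of the double-demeaned treatment residual
    (exact projection residual for a balanced panel)."""
    eps = (df[d]
           - df.groupby(unit)[d].transform("mean")
           - df.groupby(time)[d].transform("mean")
           + df[d].mean())
    treated = df[d] == 1
    # Small tolerance guards against floating-point noise around zero.
    return float((eps[treated] < -1e-9).mean())

# Toy staggered-adoption panel: A always treated, B joins in year 2,
# C joins in year 3. A's last treated year gets a negative weight.
panel = pd.DataFrame({
    "hospital": ["A"] * 3 + ["B"] * 3 + ["C"] * 3,
    "year": [1, 2, 3] * 3,
    "apm": [1, 1, 1, 0, 1, 1, 0, 0, 1],
})
share = negative_weight_share(panel)
```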

Event Study Framework
In this framework, we standardize the year a hospital joins an APM as t = 0 and plot the coefficient estimates and 95% confidence intervals from t = -4 to t = 4, leaving out t = -1 as the comparison period. All event study regression models include two-way fixed effects and our time-varying covariates. This visual display also allows us to see whether APM participation appears to incentivize interoperability several years after joining the APM. We use the method described in Freyaldenhoven S, Hansen C, Pérez JP, Shapiro JM. Visualization, Identification, and Estimation in the Linear Panel Event-Study Design. National Bureau of Economic Research; 2021. doi:10.3386/w29170, implemented in the xtevent Stata package, to estimate the linear models. With this estimator we once again find no statistically significant impact of joining an APM on interoperability. Instead, consistent with one way to reconcile our findings with previous cross-sectional evidence of an association between APM participation and interoperability, we find that APM hospitals were more likely to engage in interoperability before joining the APM. An unobserved confounder may be associated with early adoption of both data sharing and voluntary alternative payment models.
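The event-time recentering behind this plot can be sketched as follows, with t = 0 defined as the first year a hospital reports APM participation (column names are assumptions; the estimation itself is done with xtevent in Stata):

```python
import pandas as pd

def add_event_time(df, unit="hospital", time="year", d="apm"):
    """Add event time (year minus first treated year) to each row;
    t = -1 is the omitted comparison period in the regressions."""
    df = df.sort_values([unit, time]).copy()
    join_year = (df[df[d] == 1].groupby(unit)[time].min()
                 .rename("join_year").reset_index())
    df = df.merge(join_year, on=unit, how="left")
    df["event_time"] = df[time] - df["join_year"]
    return df

# Toy panel: one hospital that joins an APM in 2016.
panel = pd.DataFrame({
    "hospital": ["A"] * 4,
    "year": [2014, 2015, 2016, 2017],
    "apm": [0, 0, 1, 1],
})
es = add_event_time(panel)
```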

Relaxing the Constant Treatment Effect Assumption
To further test the robustness of our models, we use a new estimator, the two-way fixed effects counterfactual estimator (hereafter FEct) developed by Liu, Wang, and Xu (2020). FEct is similar to the traditional two-way fixed effects estimator used in reghdfe but relaxes the constant treatment effect assumption. The model requires a balanced panel and discards observations with no time under control (that is, hospitals that were always treated, i.e., always in an alternative payment model, are dropped). We estimate this model and plot coefficient estimates and 95% confidence intervals (from bootstrapped standard errors) of the average treatment effect in each year before and after joining an APM, and find a similar null result.
A second and related test is a variant of the equivalence test first proposed by Hartman and Hidalgo (2018). In this test we reverse the null hypothesis of the Wald test and test whether there is any evidence that pre-treatment residuals are nonzero. We use the default equivalence bound calculation from Hartman and Hidalgo of 0.36 times the standard deviation of the residualized nontreated outcome, and check whether the bounds of our pre-treatment period estimates fall within the equivalence bounds.
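The equivalence check described above reduces to a simple comparison, sketched here with illustrative numbers rather than the paper's estimates:

```python
import statistics

def within_equivalence_bounds(pre_estimates, residual_sd, multiplier=0.36):
    """True if every pre-treatment estimate lies inside the equivalence
    bound of (multiplier * SD of the residualized untreated outcome),
    per the Hartman and Hidalgo (2018) default."""
    bound = multiplier * residual_sd
    return all(abs(e) <= bound for e in pre_estimates)

# Illustrative residualized untreated outcomes and pre-period estimates.
residuals = [0.10, -0.05, 0.02, -0.07, 0.00]
sd = statistics.stdev(residuals)
ok = within_equivalence_bounds([0.01, -0.015, 0.005], sd)
```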