In this issue of JAMA Health Forum, Misra-Hebert and colleagues1 report on their evaluation of a program of home quarantine with remote monitoring for adults who received COVID-19 diagnoses. Among participants in this program, the study found lower adjusted odds of hospitalizations within 30 and 90 days and higher adjusted odds of outpatient visits during these periods, with no differences in emergency department visits.
The authors defined the study treatment as (voluntary) participation in this program. The control group consisted of refusers and nonresponders who were offered enrollment in this program. By design, any characteristic associated with participation will be overrepresented among participants and underrepresented among refusers compared with the entire eligible population. Thus, this design maximizes the between-group difference in these characteristics.
The mean treatment effect of participation that can be estimated under this design is the difference between the mean outcomes of participants and of refusers. This as-treated estimate cannot be interpreted as a causal effect for 3 reasons. First, the distributions of measured covariates differed between the participants and refusers. This limitation was addressed by the overlap weighting step, which uses propensity scores to estimate the difference between mean outcomes of participants and refusers as it would be if the 2 groups had a common marginal distribution of the observed covariates. Second, the distributions among participants and refusers of unmeasured covariates that were associated with outcomes (adjusted for measured covariates) may still have differed after reweighting, unless the probability of consent is a function only of observed covariates. Finally, the division of the population into participants and refusers did not correspond to any intervention that was implemented in this study.
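The overlap weighting step described above can be sketched with simulated data (a minimal illustration with hypothetical values; this is not the authors' code or data). Each participant is weighted by 1 − e(x) and each refuser by e(x), where e(x) is the estimated propensity of participation, so the weighted contrast targets the region of covariate overlap:

```python
# Overlap weighting sketch: simulated data, one measured confounder x.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                      # measured covariate
p_true = 1 / (1 + np.exp(-x))               # true participation propensity
t = rng.binomial(1, p_true)                 # 1 = participant, 0 = refuser
y = 0.5 * t + x + rng.normal(size=n)        # outcome; true effect = 0.5

# Fit a logistic propensity model by Newton's method (numpy only).
X = np.column_stack([np.ones(n), x])
b = np.zeros(2)
for _ in range(25):
    e = 1 / (1 + np.exp(-X @ b))
    b += np.linalg.solve(X.T @ (X * (e * (1 - e))[:, None]), X.T @ (t - e))
e = 1 / (1 + np.exp(-X @ b))                # estimated propensity scores

w = np.where(t == 1, 1 - e, e)              # overlap weights
naive = y[t == 1].mean() - y[t == 0].mean()       # confounded by x
weighted = (np.average(y[t == 1], weights=w[t == 1])
            - np.average(y[t == 0], weights=w[t == 0]))
print(round(naive, 2), round(weighted, 2))
```

The unweighted (as-treated) difference absorbs the imbalance in x, whereas the overlap-weighted contrast recovers the effect under the assumption, noted above, that participation depends only on observed covariates.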
If a control group were available, an intention-to-treat analysis would provide useful information. The control group would receive usual care without an offer of the monitoring program. The treatment would be defined by the protocol of the experimental program (ie, the offer followed by the selected care management approach). This 2-part treatment would be a well-defined intervention. It would mimic routine implementation of the program if the invitation to participate in the study had the same association with participation and outcomes as an invitation to enroll in home quarantine with monitoring would have if offered routinely to patients with the same indications as the study population.
Identification of the control group is crucial when designing implementation studies like the study reported by Misra-Hebert et al.1 Randomization of treatment assignment will generally inspire the most confidence in the comparability of treatment and control responses, but observational designs, such as nonconcurrent controls from the same population, might be acceptable, although perhaps less so under the fast-changing circumstances of the pandemic. Although statistical adjustment for observed covariates might be required, the exaggerated potential confounding of the as-treated analysis could be avoided. The same data could be used to estimate a local average treatment effect for participants using an instrumental variables analysis, but only under the assumption (serving as the exclusion restriction, according to instrumental variables terminology) that the offer of the monitoring program affects the outcomes only through the actual uptake of monitoring.2(pp519-526)
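The intention-to-treat contrast and the Wald instrumental variables estimate described above can be sketched with simulated data (all values hypothetical; not from the study). The randomized offer serves as the instrument for actual uptake; under the exclusion restriction, the ratio of intention-to-treat effects identifies the local average treatment effect for compliers:

```python
# Wald (instrumental variables) estimator sketch with a randomized offer.
import numpy as np

rng = np.random.default_rng(1)
n = 20000
u = rng.normal(size=n)                            # unmeasured confounder
z = rng.binomial(1, 0.5, size=n)                  # randomized offer
complier = rng.binomial(1, 1 / (1 + np.exp(-u)))  # uptake depends on u
d = z * complier                                  # uptake only when offered
y = 1.0 * d + u + rng.normal(size=n)              # true effect of uptake = 1.0

itt_y = y[z == 1].mean() - y[z == 0].mean()       # intention-to-treat effect
itt_d = d[z == 1].mean() - d[z == 0].mean()       # first stage (compliance)
late = itt_y / itt_d                              # Wald estimator
as_treated = y[d == 1].mean() - y[d == 0].mean()  # confounded by u
print(round(late, 2), round(as_treated, 2))
```

Because the offer z is randomized and affects y only through d, the Wald ratio recovers the complier effect even though the as-treated comparison is biased by the unmeasured confounder.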
Without a control group, the study by Misra-Hebert and colleagues does not provide conclusive evidence for the proposed intervention. Nonetheless, it is at least consistent with the desired treatment effect and supports reanalysis with retrospectively identified controls or a new prospectively controlled study.
Published: May 6, 2021. doi:10.1001/jamahealthforum.2021.0325
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2021 Zaslavsky AM. JAMA Health Forum.
Corresponding Author: Alan M. Zaslavsky, PhD, Department of Health Care Policy, Harvard Medical School, 180 Longwood Ave, Boston, MA 02115 (email@example.com).
Conflict of Interest Disclosures: None reported.
Zaslavsky AM. Study on COVID-19 Home Monitoring—A Control Group Is Essential. JAMA Health Forum. 2021;2(5):e210325. doi:10.1001/jamahealthforum.2021.0325