The coronavirus disease 2019 (COVID-19) pandemic has affected every area of society, and clinical investigation is no exception.1 The unprecedented speed and volume of scientific reports related to COVID-19, while challenging for editors and reviewers, are a testament to investigators’ determination to continue their work.2
At the same time, the pandemic has necessitated substantial changes in the conduct of medical research, particularly clinical trials and prospective observational studies. Maintaining participant and study staff safety has often required modifications to the design and conduct of studies, affecting everything from recruitment to follow-up and sometimes delaying the analysis and reporting of results. In this Editorial we provide guidance to authors on how to acknowledge any COVID-19 pandemic–related changes to their studies in submitted manuscripts. Our goal is to make authors aware of expectations regarding transparent communication of study results that may have been affected by the pandemic. Specifically, for studies that have been affected by the pandemic, the Methods section should include a subsection titled “Changes in Response to the COVID-19 Pandemic” and incorporate the following information, as applicable.
Protocol Modifications
Pandemic research conditions may require a range of revisions to study protocols, including modifications to the eligibility criteria, the intervention, the outcome measures, and the analysis. As with any clinical trial, the editors expect that any significant modifications to the protocol will be clearly indicated in an amendment to the protocol and reflected in an updated study registration (eg, at ClinicalTrials.gov or equivalent). If the primary outcome is modified, the change must be justified, and the original primary outcome must still be presented, even if it is available for only a subset of patients. Such modifications to the protocol may require review by an ethics oversight board or data safety monitoring board, which should also be noted (even if such review was formally waived or exempted).
Changes to the study population or intervention, even those the investigators believe are minimal, may lead to heterogeneous effects within the trial. Supplemental analyses should be presented to investigate this possibility by exploring treatment effect interactions by subpopulations or changes in interventions, including intervention delivery, necessitated by COVID-19.
Trial Delays and Interruptions
We recognize that the pandemic may lead to a delay in reporting results, which will not preclude consideration for publication in JAMA Network Open, if reasonable and justified in the manuscript. Any interruptions in study recruitment or follow-up should be stated with start and stop dates.
Missing Data and Statistical Power Statement
As noted in the journal’s Instructions for Authors, “For randomized trials, a statement of the power or sample size calculation is required. … For observational studies that use an established population, a power calculation is not generally required when the sample size is fixed. However, if the sample size was determined by the researchers, through any type of sampling or matching, then there should be some justification for the number sampled.”3 While missing data because of dropout and/or loss to follow-up present an important potential challenge for any clinical trial, these challenges may be greater than anticipated for trials conducted during the COVID-19 pandemic (eg, if participants are less able to attend follow-up visits or find remote interventions or assessments less engaging). In addition, trials conducted during the pandemic may be forced to end enrollment or follow-up earlier than planned. Such an early termination will also result in missing data (eg, in the assessment of outcomes if follow-up is truncated). For each form of missing data, it is critical that the mechanisms that gave rise to the missingness be articulated and that the number of participants affected be reported. For example, if a study is terminated early, the rationale for the decision should be explicitly stated and assurances provided that it was made independently of an unscheduled review of unmasked data.
In studies with missing or incomplete data, an inappropriate analysis may result in bias, compromising the validity of the results. In addition, there may be a loss of statistical power to detect meaningful intervention effects even if no bias is present. In considering the potential for bias, researchers should reflect on the plausibility of the assumptions that underpin the validity of statistical methods that address missing data.4 For example, missing outcome data due to early termination of the study may plausibly be viewed as missing completely at random (MCAR); intuitively, the missingness is independent of any measured or unmeasured patient characteristic. In contrast, missing data because of dropout and/or loss to follow-up may result from greater difficulty in attending or completing study visits. If the mechanisms that give rise to such attrition can be explained through measured patient characteristics, the missingness may be viewed as missing at random (MAR). However, if missingness is affected by patient factors that are unmeasured (including the missing outcomes themselves), then the missingness may need to be viewed as missing not at random (MNAR).
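To make the distinction concrete, the following hypothetical simulation (all numbers are illustrative and not drawn from any real study) shows why a complete case analysis can remain approximately unbiased under MCAR but not under MNAR, where missingness depends on the unobserved outcome itself:

```python
import random

random.seed(0)

# Illustrative outcome with true mean 10. Under MCAR, observations are
# dropped independently of their values; under MNAR, larger outcomes
# are more likely to be missing.
outcomes = [random.gauss(10, 2) for _ in range(10_000)]

# 30% of observations missing, independent of the outcome (MCAR)
mcar_observed = [y for y in outcomes if random.random() > 0.3]

# Probability of missingness increases with the outcome value (MNAR)
mnar_observed = [y for y in outcomes if random.random() > min(0.9, 0.08 * y)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"True mean:            {mean(outcomes):.2f}")
print(f"Complete-case (MCAR): {mean(mcar_observed):.2f}")  # close to the truth
print(f"Complete-case (MNAR): {mean(mnar_observed):.2f}")  # biased downward
```

Under the MNAR mechanism, the surviving complete cases systematically underrepresent high outcomes, so the naive mean is biased; no analysis of the observed data alone can reveal this without assumptions about the missingness mechanism.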
At a minimum, researchers should present a table comparing those participants with complete data and those without, stratified by assignment group. If all sources of missing data can plausibly be viewed as being MCAR, then a complete case analysis may not be subject to systematic bias. If missingness is more plausibly viewed as MAR, researchers should use an appropriate statistical analysis method to control for potential bias, such as inverse-probability weighting or multiple imputation. Approaches based on last observation carried forward should not be used.3 It is also important to note that the number of tools for addressing missing data continues to expand, with recent developments including doubly robust methods as well as bayesian nonparametric methods. Finally, if the missingness is plausibly MNAR, then a definitive bias adjustment may not be possible, and researchers should consider using appropriate sensitivity analysis methods.5
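As a hypothetical sketch of one such MAR adjustment, the simulation below (covariate, outcome values, and dropout rates all invented for illustration) shows inverse-probability weighting: when follow-up completion depends on a measured baseline covariate, weighting complete cases by the inverse of the estimated probability of being observed recovers the target mean that a naive complete case analysis misses:

```python
import random

random.seed(1)

# Hypothetical MAR scenario: dropout depends on a measured covariate
# (here, a binary "older" indicator), and the outcome differs by stratum.
data = []
for _ in range(20_000):
    older = random.random() < 0.5                          # measured baseline covariate
    outcome = random.gauss(12 if older else 8, 2)          # stratum-specific outcome
    observed = random.random() < (0.4 if older else 0.9)   # older participants drop out more
    data.append((older, outcome, observed))

# Estimate Pr(observed | covariate) from the data
def obs_rate(stratum):
    rows = [d for d in data if d[0] == stratum]
    return sum(d[2] for d in rows) / len(rows)

p_obs = {True: obs_rate(True), False: obs_rate(False)}

complete = [d for d in data if d[2]]
naive = sum(d[1] for d in complete) / len(complete)  # underweights the older stratum

weights = [1 / p_obs[d[0]] for d in complete]
ipw = sum(w * d[1] for w, d in zip(weights, complete)) / sum(weights)

print(f"Complete-case mean: {naive:.2f}")  # biased low
print(f"IPW-adjusted mean:  {ipw:.2f}")   # near the true mean of 10
```

The same logic underlies multiple imputation: both approaches are valid only insofar as the MAR assumption holds, that is, the measured covariates fully explain who is missing.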
Regardless of whether bias adjustments are needed, studies with greater than anticipated attrition or with enrollment terminated before the targeted sample size or end date was reached will likely have less power than planned, and due consideration should be given to these issues as limitations in the Discussion section. In addition to reporting the original power calculation, indicating how a change in enrollment affected statistical power can provide helpful context. While the editors understand the challenges faced by investigators, studies reporting negative results must generally still demonstrate that they retained adequate power to detect meaningful effect sizes.
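The effect of truncated enrollment on power can be quantified directly. The sketch below (sample sizes and effect size are hypothetical) uses the standard normal approximation for a two-sided two-sample comparison of means to contrast planned and achieved power:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_power(n_per_arm, effect_size, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    standardized effect size (difference in means / SD)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z = effect_size * sqrt(n_per_arm / 2) - z_alpha
    return NormalDist().cdf(z)

# Hypothetical trial: 200 per arm planned to detect a standardized
# effect of 0.3, but pandemic-related truncation leaves 120 per arm.
print(f"Planned power:  {two_sample_power(200, 0.3):.2f}")  # about 0.85
print(f"Achieved power: {two_sample_power(120, 0.3):.2f}")  # about 0.64
```

Reporting both numbers, as this Editorial recommends, lets readers judge whether a null result reflects an absent effect or simply insufficient power after truncation.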
The COVID-19 pandemic has given rise to some notable successes in clinical investigation despite incredibly difficult conditions for research involving human participants. Some of the innovations in research that it necessitated are likely to persist even after the spread of the virus is controlled. The editors remain committed to working with investigators worldwide to continue to publish high-quality clinical investigations that advance medicine and public health, even if those investigations must be modified—thoughtfully and transparently—to address the consequences of the pandemic.
Published: January 14, 2021. doi:10.1001/jamanetworkopen.2020.36155
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2021 Perlis RH et al. JAMA Network Open.
Corresponding Author: Roy H. Perlis, MD, MSc, Massachusetts General Hospital and Harvard Medical School, Simches Research Building, 185 Cambridge St, 6th Floor, Boston, MA 02114 (rperlis@mgh.harvard.edu).
Conflict of Interest Disclosures: Dr Perlis reported receiving personal fees from RID Ventures, Psy Therapeutics, Outermost Therapeutics, Belle Artificial Intelligence, Genomind, and Burrage Capital outside the submitted work. No other disclosures were reported.
4. Little RJA, Rubin DB. Statistical Analysis With Missing Data. 3rd ed. John Wiley & Sons; 2019.
5. Daniels MJ, Hogan JW. Missing Data in Longitudinal Studies: Strategies for Bayesian Modeling and Sensitivity Analysis. CRC Press; 2008. doi:10.1201/9781420011180