Invited Commentary
Oncology
October 7, 2021

Strategies to Turn Real-world Data Into Real-world Knowledge

Julian C. Hong, MD, MS1,2
Author Affiliations
  • 1Department of Radiation Oncology, University of California, San Francisco
  • 2Bakar Computational Health Sciences Institute, University of California, San Francisco
JAMA Netw Open. 2021;4(10):e2128045. doi:10.1001/jamanetworkopen.2021.28045

Real-world data (RWD), defined as “data regarding the usage, or the potential benefits or risks, of a drug derived from sources other than randomized clinical trials,”1 have emerged as an important source of clinical information since the 21st Century Cures Act was signed in 2016. Although randomized clinical trials (RCTs) remain the highest standard of clinical evidence, RWD have offered the promise of generating insights from the vast clinical data aggregated in routine care. RWD build on the history of retrospective studies, filling knowledge gaps to supplement RCTs and generating hypotheses for future trials. RWD can address a number of limitations of RCTs, including (1) resource and time intensiveness; (2) issues with external generalizability due to stringent inclusion criteria, narrow practice settings, and patient disparities in access; and (3) insufficient power to detect rare events or study uncommon diseases. RWD have played an important role in US Food and Drug Administration regulatory review but also have serious limitations attributable to bias and data quality. These shortcomings can make it challenging to draw conclusions from comparative effectiveness studies.2

Wilkinson et al3 report their analysis of patients receiving alectinib and ceritinib for non–small cell lung cancer, investigating single-group phase 2 alectinib trials and real-world alectinib and ceritinib populations. Their study3 focuses on evaluating uncertainty when using RWD. The authors should be commended for applying several approaches to characterize and manage the limitations of RWD. This presents an important opportunity to discuss best practices in analyzing RWD.

Many strategies have been critical in the evolution of RWD studies. One important standard that remains underutilized is the target trial framework, which emulates an RCT with observational data.4 This approach includes specifying a time 0 (similar to the randomization time in an RCT) to facilitate assessment of eligibility criteria and appropriate end point definition. Although the term “target trial” is not explicitly stated, Wilkinson and colleagues3 adopt 2 important strategies from this framework: defining time 0 and assessing eligibility criteria based on that time. Although the authors closely approximate the intention-to-treat analysis used in prospective trials, the use of treatment initiation as time 0 differs from the randomized setting, where events can occur between randomization and treatment initiation (such as the development of a contraindication to a specific therapy or death).4 This limitation is less impactful in this study,3 because each comparison group received a systemic agent. In other studies, this choice can introduce selection and immortal time bias (eg, patients who receive adjuvant chemotherapy after surgery must have survived long enough to receive the treatment).4 Where possible, identifying the time at which a physician decided to initiate therapy would more closely approximate the randomization time of an RCT; however, this requires the availability and manual review of clinical documentation.
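The immortal time bias described above can be made concrete with a small numerical sketch. All patient values below are fabricated for illustration; the comparison of a naive analysis against a landmark restriction is one simple way to see the mechanism, not the authors' method.

```python
# Hypothetical illustration of immortal time bias when time 0 is set at
# treatment initiation rather than at a shared decision point.
# All patient data below are fabricated for demonstration.

# Each record: (received_adjuvant_therapy, survival_months_from_surgery)
patients = [
    (True, 24), (True, 30), (True, 18), (True, 36),
    (False, 2), (False, 28), (False, 3), (False, 22),
]

THERAPY_START_MONTH = 3  # assume adjuvant therapy begins 3 months post-op

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison: treated patients must have survived until therapy start,
# so early deaths can appear only in the untreated group (immortal time bias).
naive_treated = mean([s for t, s in patients if t])
naive_untreated = mean([s for t, s in patients if not t])

# Landmark analysis: restrict both groups to patients alive at the landmark
# (an approximation of a shared time 0), removing the immortal-time advantage.
landmark = [(t, s) for t, s in patients if s >= THERAPY_START_MONTH]
lm_treated = mean([s for t, s in landmark if t])
lm_untreated = mean([s for t, s in landmark if not t])

print(naive_treated - naive_untreated)  # inflated treated advantage
print(lm_treated - lm_untreated)        # smaller after landmark restriction
```

In this fabricated cohort, the naive treated-vs-untreated difference shrinks once both groups are restricted to patients alive at the landmark, because the early deaths no longer count against only the untreated group.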

Confounding is an important consideration, as comparisons frequently occur between biased populations. Confounders can affect treatment selection, treatment outcomes, or both. Wilkinson and colleagues3 apply propensity weighting, in which patients are weighted by their estimated propensity to receive a given treatment based on measured covariates, creating balanced groups. Other strategies include matching, restriction, stratification, and regression, which offer complementary approaches to RWD. There is no single panacea; each approach has strengths and weaknesses,5 and transparent reporting and interpretation are critical.
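As an illustration of propensity weighting (a generic inverse probability of treatment weighting sketch, not the authors' actual implementation), the following uses synthetic data with a single binary confounder that drives treatment assignment, and shows that weighting balances the confounder across groups:

```python
# Minimal inverse probability of treatment weighting (IPTW) sketch.
# Data are synthetic: one binary confounder (poor performance status)
# influences the probability of receiving treatment.

from collections import defaultdict

# Each record: (poor_performance_status, treated)
cohort = (
    [(1, 1)] * 10 + [(1, 0)] * 30 +  # poor PS: 25% treated
    [(0, 1)] * 30 + [(0, 0)] * 10    # good PS: 75% treated
)

# Step 1: estimate the propensity score P(treated | covariate) per stratum.
counts = defaultdict(lambda: [0, 0])  # stratum -> [treated, total]
for ps, treated in cohort:
    counts[ps][0] += treated
    counts[ps][1] += 1
propensity = {ps: t / n for ps, (t, n) in counts.items()}

# Step 2: weight treated patients by 1/e and untreated by 1/(1 - e).
def weight(ps, treated):
    e = propensity[ps]
    return 1 / e if treated else 1 / (1 - e)

# Step 3: check covariate balance in the weighted pseudo-population.
def weighted_prevalence(group_treated):
    rows = [(ps, t) for ps, t in cohort if t == group_treated]
    total = sum(weight(ps, t) for ps, t in rows)
    poor = sum(weight(ps, t) for ps, t in rows if ps == 1)
    return poor / total

print(weighted_prevalence(1))  # prevalence of poor PS among treated
print(weighted_prevalence(0))  # balanced with the untreated group
```

Before weighting, 25% of treated vs 75% of untreated patients have poor performance status; after weighting, both pseudo-populations sit at 50%, which is the balance the method is designed to achieve.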

Unfortunately, these strategies are limited by the availability and quality of measured confounders and remain vulnerable to unmeasured confounders.5 Wilkinson et al3 demonstrate important strategies to mitigate 2 primary limitations in their available data. ECOG performance status was missing for 47.3% of patients in the ceritinib RWD group and 34.6% of patients in the alectinib RWD group, and important confounders such as socioeconomic characteristics and prior receipt of nonsystemic therapy (which is relevant given the role of local therapies in stage IIIB and oligometastatic non–small cell lung cancer) were not available.3 To evaluate the potential impact of unmeasured confounders, the authors use quantitative bias analysis, which quantifies how strong the biasing effect of an unmeasured confounder would need to be to alter study results. This provides an understanding of how robust the results are to unmeasured confounding. Best practices in bias analysis, including conservative interpretation because of the underlying assumptions, have been previously described.5,6 In oncology, future studies may also benefit from the integration of multiple data sources, such as oncology information systems and tumor registries, and the application of computational approaches, such as natural language processing,7 to improve the measurement of confounders.
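One widely used, concrete instance of quantitative bias analysis is the E-value (VanderWeele and Ding): the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need with both treatment and outcome to fully explain away an observed association. It is offered here as a related named technique, not as the specific analysis the authors performed:

```python
# E-value for a risk ratio point estimate:
#   E = RR + sqrt(RR * (RR - 1))  for RR > 1;
# protective effects (RR < 1) are inverted before applying the formula.

import math

def e_value(rr):
    """Minimum confounder strength needed to explain away an observed RR."""
    if rr < 1:
        rr = 1 / rr  # invert protective risk ratios
    return rr + math.sqrt(rr * (rr - 1))

# Example: an observed RR of 2.0 could be fully explained away only by an
# unmeasured confounder associated with both treatment and outcome by a
# risk ratio of about 3.41 each.
print(round(e_value(2.0), 2))  # → 3.41
```

A large E-value suggests the finding is robust to plausible unmeasured confounding; a small one signals fragility, which is the kind of conservative interpretation the cited best practices emphasize.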

Data quality and missingness in analyzed covariates are also important challenges. In particular, routine clinical data are frequently acquired, or left unrecorded, because of intentional processes in the health care system (eg, vital signs are obtained more frequently for patients with more-acute conditions). This informative missingness can result in information bias. Therefore, limiting analyses to patients with complete data can bias results, and alternative approaches such as imputation can be helpful. Wilkinson and colleagues3 highlight the limitations of missing baseline ECOG performance status and appropriately consider the possibility that performance status was missing in a nonrandom fashion. The use of multiple approaches under varying assumptions to verify their findings strengthens their analyses. Overall, it is important for investigators and readers to understand the strengths and weaknesses of different imputation approaches.8

Differential data acquisition can also create discrepancies between data collected during routine clinical care vs clinical trials. In their study, Wilkinson et al3 compare 2 populations receiving alectinib (patients enrolled in phase 2 trials and real-world patients) with real-world patients receiving ceritinib. Importantly, RWD have played an increasing role in the development of synthetic controls for single-group studies. As the authors describe,3 data harmonization for this use remains an important barrier, and harmonization decisions should be clearly reported. Given this limitation and variations in study populations, the authors’ replication of results using both single-group and real-world populations is important to support confidence in their findings.

RWD and RCTs will continue to serve complementary roles. RWD can efficiently inform the next generation of RCTs and fill knowledge gaps when RCTs are not feasible (whether because of cost, time, or other considerations), while also providing large sample sizes and better external generalizability. RCTs are designed to maximize internal validity and the ability to make causal inferences within the confines of a well-controlled environment.

For clinicians to identify best practices, the challenge will be in synthesizing RCT results and RWD to optimize decisions for individual patients. It has been well documented that the results of observational studies can be challenging to reconcile with those of RCTs.2 Discordance should be anticipated and often relates to the aforementioned challenges. At other times, it may reflect genuine differences between trial and real-world populations and clinical practice. Analogous discordance exists among RCTs themselves: benefits identified in smaller-scale RCTs are sometimes not reproduced when evaluated in the cooperative group setting. Moreover, a proportion of RCTs will have incorrect findings due to chance, which can be identified only when multiple trials address similar hypotheses. Overall, disagreement between RCTs and RWD may further inform the design of future RCTs.

We should look forward to the continued evolution of best practices in extracting, harmonizing, and analyzing RWD. Maximizing our ability to draw conclusions from these data and placing them in appropriate context with RCTs will be critical to advancing patient care in a timely and resource-efficient manner.

Article Information

Published: October 7, 2021. doi:10.1001/jamanetworkopen.2021.28045

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2021 Hong JC. JAMA Network Open.

Corresponding Author: Julian C. Hong, MD, MS, Department of Radiation Oncology, University of California, San Francisco, 1825 Fourth St, Ste L1101, San Francisco, CA 94158 (julian.hong@ucsf.edu).

Conflict of Interest Disclosures: Dr Hong reported being a coinventor on a pending patent that is unrelated to this article. His research has been funded by the American Cancer Society and he is supported by a Career Development Award from Conquer Cancer. No other disclosures were reported.

References
1. Cornell Law School. 21 U.S. Code § 355g. Utilizing real world evidence. January 6, 2017. Accessed September 8, 2021. https://www.law.cornell.edu/uscode/text/21/355g
2. Kumar A, Guss ZD, Courtney PT, et al. Evaluation of the use of cancer registry data for comparative effectiveness research. JAMA Netw Open. 2020;3(7):e2011985. doi:10.1001/jamanetworkopen.2020.11985
3. Wilkinson S, Gupta A, Scheuer N, et al. Assessment of alectinib vs ceritinib in ALK-positive non–small cell lung cancer in phase 2 trials and in real-world data. JAMA Netw Open. 2021;4(10):e2126306. doi:10.1001/jamanetworkopen.2021.26306
4. Hernán MA, Robins JM. Using big data to emulate a target trial when a randomized trial is not available. Am J Epidemiol. 2016;183(8):758-764. doi:10.1093/aje/kwv254
5. Levenson M, He W, Chen J, et al. Biostatistical considerations when using RWD and RWE in clinical studies for regulatory purposes: a landscape assessment. Stat Biopharm Res. Published online March 10, 2021. doi:10.1080/19466315.2021.1883473
6. Lash TL, Fox MP, MacLehose RF, Maldonado G, McCandless LC, Greenland S. Good practices for quantitative bias analysis. Int J Epidemiol. 2014;43(6):1969-1985. doi:10.1093/ije/dyu149
7. Hong JC, Fairchild AT, Tanksley JP, Palta M, Tenenbaum JD. Natural language processing for abstraction of cancer treatment toxicities: accuracy versus human experts. JAMIA Open. 2020;3(4):513-517. doi:10.1093/jamiaopen/ooaa064
8. Sterne JAC, White IR, Carlin JB, et al. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. BMJ. 2009;338:b2393. doi:10.1136/bmj.b2393