Invited Commentary
October 26, 2018

The Promise and Pitfalls of Pragmatic Clinical Trials for Improving Health Care Quality

Author Affiliations
  • 1Center for Healthcare Delivery Sciences, Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts
JAMA Netw Open. 2018;1(6):e183376. doi:10.1001/jamanetworkopen.2018.3376

Despite practice guidelines for most chronic diseases in the United States, uptake of evidence-based interventions for these conditions into routine practice has been imperfect and slow.1 These know-do gaps represent an important opportunity to substantially improve the effectiveness and efficiency of our health care system. Implementation research seeks to address this problem through the scientific study of methods to promote the appropriate adoption of evidence-based interventions into routine care.2

Of the many ways to generate rigorous evidence to overcome gaps in health care, pragmatic clinical trials are an essential tool. In contrast to typical trials, pragmatic trials are run in real-world settings, test interventions compared with usual care (rather than placebo), and are conducted in a way that seeks to enhance the generalizability of the results that they produce.3 As such, these study designs are highly aligned with the goals of implementation research.

The Pragmatic Explanatory Continuum Indicator Summary 2 (PRECIS-2) tool identifies 9 design domains that determine how pragmatic a trial is.4 These include broad eligibility criteria, ease of recruitment, a generalizable setting, minimal organization or resources required, flexibility in the delivery of the intervention, flexibility in adherence to the intervention, ease of follow-up, a primary outcome that is relevant to patients, and analyses based on intention-to-treat principles. Although not formally included in the PRECIS-2 tool, pragmatic trials often strive to improve efficiency and lower costs with regard to how participants are identified and how outcomes are evaluated.5

The implementation research trial conducted by Carroll et al6 aimed to increase evidence-based care for chronic kidney disease in primary care. Health care organizations were cluster randomized to receive either electronic health record (EHR)–based clinical decision support alone or clinical decision support plus practice facilitation. The primary outcome was change in estimated glomerular filtration rate over time. Secondary outcomes included change in hemoglobin A1c, avoidance of nonsteroidal anti-inflammatory drugs, use of renin-angiotensin system inhibitors, listing chronic kidney disease as a diagnosis, blood pressure control, and smoking cessation. All outcomes were measured using routinely collected EHR data. The study found that the combination of clinical decision support and practice facilitation lowered the rate at which estimated glomerular filtration rate decreased over time. It also improved hemoglobin A1c level but did not affect other secondary outcomes.

By PRECIS-2 criteria, this trial was very pragmatic. The study setting included an array of primary care practices across the United States. Nearly all patients in each practice with eligible estimated glomerular filtration rate values were included. The study’s recruitment strategy leveraged an existing primary care research network. The trial compared practice facilitation and clinical decision support with clinical decision support alone. While this limits the ability to make inferences about the independent effect of these 2 interventions, it did allow for a valid comparison of the intended intervention with contemporary usual care. In addition, all outcomes were assessed using routinely collected data, and the primary analysis included all eligible patients.

Despite the use of these strategies, this study also faced challenges that are not uncommon with pragmatic trials. Most notably, 28% of the randomized practices dropped out. Conducting a study in a real-world setting depends heavily on the sustained involvement of organizational leadership and local site champions, the modification of practice workflows to incorporate study procedures, and, in this trial, the modification of the EHR to provide decision support. Unfortunately, changes in practice ownership and in the EHR systems that practices used led some practices to withdraw from continued participation. Even when there are no major changes to practice leadership or the EHR, trial recruitment can be slow, and clinicians, who are busy with multiple competing demands, may be difficult to engage.

The dropout of clusters in this study is relevant because it introduced selection bias and covariate imbalance between the treatment groups. For example, smaller, independent practices were more likely to be purchased by larger organizations and thus drop out of the trial. Because these practices were likely to be systematically different from the practices that remained in the study, their loss could limit the generalizability of the study results. Further, because dropout rates differed between the study arms (20% of practices dropped out in the intervention arm compared with 41% in the control arm), dropout likely contributed to imbalance between the randomized groups. For example, in the final sample, control practices had a higher baseline frequency of diagnoses of chronic kidney disease and diabetes and had greater use of renin-angiotensin system inhibitors.

The threat of imbalance in cluster randomized trials is not uncommon and is typically handled in the study design and execution phases. Early engagement of practice leadership can help facilitate all subsequent steps of the trial. Trials also benefit from local site champions who are engaged with the research question at hand and whose opinions are respected by other clinicians.7 During study design, it can be helpful to understand the research, quality, or management agendas of participating clinics to facilitate collaboration. Randomizing clusters within strata based on characteristics thought to be relevant potential confounders helps ensure that randomized groups are equivalent. Covariate constrained randomization, as was done in this trial, is an alternative and elegant way of balancing on many more potential confounders at the time of randomization. During the intervention period of the trial, feedback from sites early in recruitment can help refine the intervention or its rollout for subsequent clinics. Frequent feedback to study sites on their enrollment and follow-up completion rates can help foster ongoing engagement.
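To make the idea of covariate constrained randomization concrete, the following is a minimal sketch in Python. All practice names, covariate values, and thresholds are illustrative and are not taken from the Carroll et al trial: the essential steps are to enumerate the possible allocations of clusters to arms, keep only the allocations that are well balanced on prespecified baseline covariates, and then draw the final allocation at random from that constrained set.

```python
# Illustrative sketch of covariate-constrained randomization for a
# cluster randomized trial. Names, covariates, and the 10% balance
# threshold are hypothetical.
import itertools
import random

# Hypothetical baseline covariates for 8 practices (clusters):
# (practice size, baseline proportion with a CKD diagnosis)
practices = {
    "A": (12, 0.30), "B": (45, 0.22), "C": (20, 0.35), "D": (33, 0.28),
    "E": (15, 0.31), "F": (40, 0.25), "G": (25, 0.27), "H": (30, 0.29),
}

def imbalance(arm1, arm2):
    """Sum of absolute differences in covariate means between arms."""
    def means(arm):
        sizes = [practices[p][0] for p in arm]
        ckd = [practices[p][1] for p in arm]
        return sum(sizes) / len(sizes), sum(ckd) / len(ckd)
    m1, m2 = means(arm1), means(arm2)
    # Scale the size difference so both covariates contribute comparably.
    return abs(m1[0] - m2[0]) / 10 + abs(m1[1] - m2[1])

ids = sorted(practices)
# Enumerate all 4-vs-4 splits and score each on covariate balance.
candidates = []
for arm1 in itertools.combinations(ids, 4):
    arm2 = tuple(p for p in ids if p not in arm1)
    candidates.append((imbalance(arm1, arm2), arm1, arm2))
candidates.sort(key=lambda c: c[0])
constrained = candidates[: len(candidates) // 10]  # best-balanced 10%

# The final allocation is drawn at random from the constrained set,
# preserving randomization while ruling out badly imbalanced splits.
score, intervention, control = random.choice(constrained)
print("intervention:", intervention, "control:", control)
```

The key design choice is that randomness is retained (any allocation in the constrained set can be chosen) while allocations that would produce severe baseline imbalance are excluded before the draw.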

Despite these efforts, if cluster dropout or differential enrollment leaves the study arms imbalanced on important covariates, this can also be addressed, to some extent, in the trial's analysis phase. This is usually done with multivariable regression modeling that adjusts for imbalanced baseline covariates. In this trial, Carroll et al6 used propensity score matching, a technique typically used in observational research, where differences in baseline characteristics between study groups are exceptionally common. Needing adjustment techniques in the randomized setting, where confounding is intended to be handled by design, is to some extent unfortunate, but it also highlights the reality of conducting highly pragmatic trials like this one.
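For readers unfamiliar with the mechanics, a minimal sketch of 1:1 nearest-neighbor propensity score matching follows. The scores and practice names are hypothetical; in an actual analysis the propensity score would first be estimated, for example by logistic regression of treatment assignment on baseline covariates.

```python
# Illustrative sketch of greedy 1:1 nearest-neighbor propensity score
# matching without replacement. All names and scores are hypothetical.

# Hypothetical propensity scores for intervention and control practices.
intervention = {"I1": 0.62, "I2": 0.48, "I3": 0.71, "I4": 0.55}
control = {"C1": 0.60, "C2": 0.45, "C3": 0.80, "C4": 0.52, "C5": 0.30}

def match(treated, pool, caliper=0.1):
    """Pair each treated unit with the nearest-score control unit,
    discarding pairs whose score difference exceeds the caliper."""
    pool = dict(pool)  # copy so matched controls can be removed
    pairs = []
    for unit, score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not pool:
            break
        best = min(pool, key=lambda c: abs(pool[c] - score))
        if abs(pool[best] - score) <= caliper:
            pairs.append((unit, best))
            del pool[best]  # match without replacement
    return pairs

pairs = match(intervention, control)
print(pairs)
```

The caliper discards poor matches rather than forcing every treated unit into a pair, which trades sample size for better covariate balance in the matched set.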

The study by Carroll et al6 is an important example of a pragmatic implementation research trial, both because of the success and generalizability of the intervention and because it highlights common perils of pragmatic trials. Randomized trials seeking to address implementation gaps must operate in real-world settings and so must frequently make trade-offs between internal validity and generalizability. It is important to learn from the setbacks encountered by prior trials as we seek to expand this real-world evidence base and narrow the know-do gap.

Article Information

Published: October 26, 2018. doi:10.1001/jamanetworkopen.2018.3376

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2018 Haff N et al. JAMA Network Open.

Corresponding Author: Niteesh K. Choudhry, MD, PhD, Brigham and Women’s Hospital, Harvard Medical School, 1620 Tremont St, Ste 3030, Boston, MA 02120 (nkchoudhry@bwh.harvard.edu).

Conflict of Interest Disclosures: None reported.

References

1. Fischer F, Lange K, Klose K, Greiner W, Kraemer A. Barriers and strategies in guideline implementation: a scoping review. Healthcare (Basel). 2016;4(3):4.
2. Eccles MP, Armstrong D, Baker R, et al. An implementation research agenda. Implement Sci. 2009;4:18. doi:10.1186/1748-5908-4-18
3. Ford I, Norrie J. Pragmatic trials. N Engl J Med. 2016;375(5):454-463. doi:10.1056/NEJMra1510059
4. Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350:h2147. doi:10.1136/bmj.h2147
5. Choudhry NK. Randomized, controlled trials in health insurance systems. N Engl J Med. 2017;377(10):957-964. doi:10.1056/NEJMra1510058
6. Carroll JK, Pulver G, Dickinson LM, et al. Effect of 2 clinical decision support strategies on chronic kidney disease outcomes in primary care: a cluster randomized trial. JAMA Netw Open. 2018;1(6):e183377. doi:10.1001/jamanetworkopen.2018.3377
7. Flodgren G, Parmelli E, Doumit G, et al. Local opinion leaders: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2011;(8):CD000125. doi:10.1002/14651858.CD000125.pub4