Invited Commentary
Ethics
January 25, 2019

Deception and Study Participation—Unintended Influences and Ramifications for Clinical Trials

Author Affiliations
  • 1Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
  • 2Ammon-Pinizzotto Center for Women’s Mental Health, Massachusetts General Hospital, Boston
  • 3Clinical Trials Network and Institute, Massachusetts General Hospital, Boston
JAMA Netw Open. 2019;2(1):e187359. doi:10.1001/jamanetworkopen.2018.7359

Numerous variables challenge our ability to derive clinically meaningful data from treatment research. The stakes are high: clinical research findings must be accurate and applicable to the real world so that the results lead to improvements in patient care. Short of that, patient safety may be compromised, effective treatments may be prematurely condemned, and, not unimportantly, tremendous resources may be squandered conducting clinical trials with uninterpretable results. Participation in clinical trials does not precisely, or sometimes even remotely, reflect real-world patient care, and this divergence may begin with how patients first gain entry into trials with strict inclusion and exclusion criteria. Patient remuneration may further differentiate study participants from patients in a nonresearch setting, and this topic deserves further understanding.

Fernandez Lynch and colleagues1 report on a study assessing the association between financial incentives and participant deception about eligibility for an online survey. They conducted the study in collaboration with an online survey company with access to nationally representative samples, randomizing participants into 7 groups. In the control condition (group 1), participants were told they were eligible if they had ever received any vaccine in their lifetime, a criterion described by the authors as “intended to achieve near-universal eligibility and avoid any incentive to provide either affirmative or negative answers to a subsequent question specifically about influenza vaccination status.”1 In the 6 experimental groups, participants were randomized across the “direction of the self-reported eligibility criterion” (whether influenza vaccination in the past 6 months rendered the participant eligible or ineligible) and the amount of the incentive offered for eligibility: $5, $10, or $20. Participants in group 1 received $5 each. After the eligibility question about vaccination within the past 6 months, participants answered questions about attitudes toward vaccination. The overall survey response rate was 59.4%. In group 1, 52.2% reported having received a vaccine in the past 6 months; in the experimental groups, the reported vaccination rate was higher among those incentivized to report vaccination and lower among those incentivized to report no vaccination, deviating from the control condition by 16.6 percentage points for those offered $5, 21.0 points for those offered $10, and 15.4 points for those offered $20, deviations representing rates of deception. The results did not differ significantly by amount of reimbursement.
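The arithmetic behind these deception estimates can be made concrete with a brief sketch: because the control condition carries no incentive to misreport, the deviation of each experimental group's self-reported vaccination rate from the control rate serves as the estimate of deception. The sketch below uses only the percentages quoted above; the implied group-level rates and the assumption of symmetric over- and under-reporting are illustrative, not figures from the original analysis.

```python
# Sketch: deception estimated as deviation from the control condition,
# using the rates quoted in the commentary above.
# The implied per-group rates below are illustrative assumptions.

CONTROL_VACCINATED = 52.2  # % of group 1 reporting vaccination in past 6 months

# Reported deviations from control (percentage points) by incentive amount.
deviations = {5: 16.6, 10: 21.0, 20: 15.4}

for amount, delta in deviations.items():
    # Groups incentivized to report vaccination deviate upward;
    # groups incentivized to report no vaccination deviate downward.
    over = CONTROL_VACCINATED + delta
    under = CONTROL_VACCINATED - delta
    print(f"${amount} incentive: implied {over:.1f}% (eligible if vaccinated) "
          f"vs {under:.1f}% (eligible if unvaccinated); "
          f"estimated deception = {delta:.1f} percentage points")
```

Note that the estimate is a group-level difference, not an individual-level measure: no single respondent can be identified as deceptive, only the aggregate shift attributable to the incentive structure.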

The investigators discuss the strengths and limitations of their study, including the limitations that their findings pertain to participation in a survey study and that there is a need to replicate the findings in clinical trials. Indeed, as they discussed, the clinical trial context is also more complex because “payment amounts and other benefits—such as access to potentially desirable investigational medicines—are often more substantial than typically associated with survey research.”1

The findings underscore the importance of quality assurance measures in study recruitment and eligibility assessment, because clinical trials have serious repercussions for treatment development and may ultimately influence patient care. Such a rate of deception about eligibility among healthy volunteers participating in a survey raises concern for more complex clinical research scenarios in which additional variables may promote deception, including (1) that deception might be greater when individuals with illnesses, rather than healthy controls, pursue treatment studies and (2) that concurrent investigator incentives might further shape deceptive behavior by study participants.

Beyond direct remuneration, participants with a disease condition have other incentives to enter a clinical trial. Receipt of treatment and clinical attention may disproportionately sway those who are uninsured or underinsured, those who otherwise lack access to care, and those who find participation in a clinical trial less stigmatizing than seeking treatment through standard care.

Participant deception is not the only deception that must be guarded against during enrollment to ensure the scientific integrity and safety of clinical trials. Study participants are not the only ones who gain by their enrollment. Study personnel also have incentives to enroll participants and may consciously or unconsciously engage in deception. Whether or not a site has a financial incentive for enrollment, adequate numbers of participants are required for a trial to progress, and there are many pressures to enroll, often under pressing timelines.2,3 For industry-sponsored trials, payment to research sites is usually tied strongly to enrollment and retention, rewarding the quantity of participants enrolled. Rigorous adherence to inclusion and exclusion criteria and attempts to weed out potentially deceptive participants are rarely rewarded.

One method used to reduce the enrollment of inappropriate study participants is the remote eligibility interview, conducted by trained interviewers who are knowledgeable about the disease state and how it manifests under real-world circumstances. Colleagues and I previously analyzed outcomes of remote assessments performed after on-site eligibility determination for treatment studies in major depressive disorder.4 Of participants deemed eligible at intake assessments at research sites, 15.5% were found ineligible on remote centralized structured interviews by expert assessors. Reasons for ineligibility included not meeting the specified illness severity for inclusion, not having the treatment history specified for participation, or not meeting other diagnostic criteria. US research sites had higher rates of failure on remote eligibility interviews than non-US sites, whereas academic and nonacademic sites had similar failure rates. Although we can only speculate about why these differences might exist, there has been an emergence of study participants who are guided by financial gain, may be nonadherent to study protocols, and may exhibit noninformative responses regarding potential treatments in development.5

The prevalence of deception observed by Fernandez Lynch and colleagues1 was almost exactly the same as the rate of noneligibility we found on remote assessment of study participants deemed eligible at research sites. This consistency suggests that a significant minority of participants may be inappropriately enrolled into trials, rendering a good deal of the outcome data meaningless or at least substantially flawed. Lack of precision in clinical trial enrollment may hinder the ability to detect signals of efficacy between intervention and placebo arms, a major problem in the development of novel treatments.6,7

Ideally, clinical trials represent real-world conditions accurately, and the participants resemble the patient populations for whom treatments are targeted. The many variables that might be imprecisely assessed in participant recruitment include the disease state or diagnosis, its severity, protocol treatment adherence, and validity of outcomes assessments. To be meaningful, studies must adequately capture the intended patient population who will adhere to the study protocol and participate in the assessment of clinically meaningful outcomes without undue influence of external factors.

In summary, Fernandez Lynch and colleagues1 shed light on a critically important aspect of study enrollment: patient deception. Future research into patient deception in clinical trials is imperative to help focus research resources and hone the development of new and novel treatments.

Article Information

Published: January 25, 2019. doi:10.1001/jamanetworkopen.2018.7359

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2019 Freeman MP. JAMA Network Open.

Corresponding Author: Marlene P. Freeman, MD, Ammon-Pinizzotto Center for Women’s Mental Health, Massachusetts General Hospital, 185 Cambridge St, Second Floor, Boston, MA 02114 (mfreeman@partners.org).

Conflict of Interest Disclosures: Dr Freeman reported consulting for Otsuka and Alkermes; reported medical editing of the Global Organization for EPA and DHA Omega-3s (GOED) newsletter; reported serving as editor in chief of the Journal of Clinical Psychiatry; reported conducting investigator-initiated trials and research for Takeda, JayMac, and Sage; reported serving on advisory boards for Otsuka, Alkermes, Janssen (Johnson & Johnson), Sage, and Sunovion; reported serving on an independent data safety and monitoring committee for Janssen (Johnson & Johnson); reported being an employee of Massachusetts General Hospital (MGH) and working with the MGH National Pregnancy Registry (current registry sponsors are Teva, Alkermes, Otsuka, Forest/Actavis, and Sunovion); and reported being an employee of MGH working with the MGH Clinical Trials Network and Institute, which has had research funding from multiple pharmaceutical companies and the National Institute of Mental Health.

References
1.
Fernandez Lynch H, Joffe S, Thirumurthy H, Xie D, Largent EA. Association between financial incentives and participant deception about study eligibility. JAMA Netw Open. 2019;2(1):e187355. doi:10.1001/jamanetworkopen.2018.7355
2.
Hall MA, Friedman JY, King NM, Weinfurt KP, Schulman KA, Sugarman J. Commentary: per capita payments in clinical trials: reasonable costs versus bounty hunting. Acad Med. 2010;85(10):1554-1556. doi:10.1097/ACM.0b013e3181ef9cc6
3.
Puttagunta PS, Caulfield TA, Griener G. Conflict of interest in clinical research: direct payment to the investigators for finding human subjects and health information. Health Law Rev. 2002;10(2):30-32.
4.
Freeman MP, Pooley J, Flynn MJ, et al. Guarding the gate: remote structured assessments to enhance enrollment precision in depression trials. J Clin Psychopharmacol. 2017;37(2):176-181. doi:10.1097/JCP.0000000000000669
5.
McCann DJ, Petry NM, Bresell A, Isacsson E, Wilson E, Alexander RC. Medication nonadherence, “professional subjects,” and apparent placebo responders: overlapping challenges for medications development. J Clin Psychopharmacol. 2015;35(5):566-573. doi:10.1097/JCP.0000000000000372
6.
Montgomery SA. The failure of placebo-controlled studies: ECNP Consensus Meeting, September 13, 1997, Vienna, European College of Neuropsychopharmacology. Eur Neuropsychopharmacol. 1999;9(3):271-276. doi:10.1016/S0924-977X(98)00050-9
7.
Iovieno N, Papakostas GI. Correlation between different levels of placebo response rate and clinical trial outcome in major depressive disorder: a meta-analysis. J Clin Psychiatry. 2012;73(10):1300-1306. doi:10.4088/JCP.11r07485