Figure 2. Forest plot of effect size (ES) values (Hedges g) for PD, showing separate results according to the US Food and Drug Administration (FDA) and according to the scientific literature. Weights are from random-effects analysis.
Figure 3. Forest plot of effect size (ES) values (Hedges g) for OCD, showing separate results according to the FDA and according to the scientific literature. Weights are from random-effects analysis.
eTable 1. Number of Trials Reviewed by the FDA for Approved SSRIs and SNRIs
eTable 2. Spin: Conclusion FDA Review Versus Conclusion in Matching Journal Article
eFigure 1. Flow Diagram
eFigure 2. Forest Plots for GAD, SAD, and PTSD
Roest AM, de Jonge P, Williams CD, de Vries YA, Schoevers RA, Turner EH. Reporting Bias in Clinical Trials Investigating the Efficacy of Second-Generation Antidepressants in the Treatment of Anxiety Disorders: A Report of 2 Meta-analyses. JAMA Psychiatry. 2015;72(5):500-510. doi:10.1001/jamapsychiatry.2015.15
Importance
Studies have shown that the scientific literature has overestimated the efficacy of antidepressants for depression, but other indications for these drugs have not been considered.
Objective
To examine reporting biases in double-blind, placebo-controlled trials on the pharmacologic treatment of anxiety disorders and to quantify the extent to which these biases inflate estimates of drug efficacy.
Data Sources and Study Selection
We included reviews obtained from the US Food and Drug Administration (FDA) for premarketing trials of 9 second-generation antidepressants in the treatment of anxiety disorders. A systematic search for matching publications (until December 19, 2012) was performed using PubMed, EMBASE, and the Cochrane Central Register of Controlled Trials.
Data Extraction and Synthesis
Double data extraction was performed for the FDA reviews and the journal articles. The Hedges g value was calculated as the measure of effect size.
Main Outcomes and Measures
Reporting bias was examined and classified as study publication bias, outcome reporting bias, or spin (abstract conclusion not consistent with published results on primary end point). Separate meta-analyses were conducted for the 2 sources, and the effect of publication status on the effect estimates was examined using meta-regression.
Results
The findings of 41 of the 57 trials (72%) were positive according to the FDA, but 43 of the 45 published article conclusions (96%) were positive (P < .001). Trials that the FDA determined as positive were 5 times more likely to be published in agreement with that determination compared with trials determined as not positive (risk ratio, 5.20; 95% CI, 1.87 to 14.45; P < .001). We found evidence for study publication bias (P < .001), outcome reporting bias (P = .02), and spin (P = .02). The pooled effect size based on the published literature (Hedges g, 0.38; 95% CI, 0.33 to 0.42; P < .001) was 15% higher than the effect size based on the FDA data (Hedges g, 0.33; 95% CI, 0.29 to 0.38; P < .001), but this difference was not statistically significant (β = 0.04; 95% CI, –0.02 to 0.10; P = .18).
Conclusions and Relevance
Various reporting biases were present for trials on the efficacy of FDA-approved second-generation antidepressants for anxiety disorders. Although these biases did not significantly inflate estimates of drug efficacy, reporting biases led to significant increases in the number of positive findings in the literature.
There is strong evidence1 that significant results from randomized clinical trials are more likely to be published than are nonsignificant results. As a consequence, published studies, including meta-analyses, may overestimate the benefits of treatments while underestimating their harms, thus misinforming physicians, policy makers, and patients.2
Different types of reporting biases can be present. Study publication bias occurs when studies with positive results are more likely to be published than studies with negative results.3 Outcome reporting bias involves publishing outcomes from a study that are “positive” (eg, statistically significant) without publishing “negative” outcomes or switching the status of primary and secondary outcomes based on results.4 Finally, spin occurs when treatments are described by investigators as beneficial, even though published results for primary outcomes are nonsignificant.5
The registry and results database of the US Food and Drug Administration (FDA) can be used to assess the degree to which published trial results may overestimate efficacy.6-8 Pharmaceutical companies must register all trials they intend to use in support of an application for US marketing approval with the FDA; information on these trials is compiled in this database. A previous study6 found that 51% of the trials on antidepressants used in the treatment of major depressive disorder were deemed positive by the FDA compared with 94% of those in the literature; in addition, a meta-analysis6 of only published data overestimated the effect of antidepressants by 32%. This finding was followed by debate and additional research on the efficacy of antidepressants for depression.2,9,10
Antidepressants are widely prescribed for conditions other than depression.11 However, research on reporting biases for these other indications is lacking. Anxiety disorders are common in the general population, with an estimated 1-year prevalence of 12%.12 Second-generation antidepressant drugs, namely selective serotonin reuptake inhibitors and serotonin norepinephrine reuptake inhibitors, are the primary pharmacologic treatments for generalized anxiety disorder (GAD),13,14 panic disorder (PD),14,15 social anxiety disorder (SAD),16 posttraumatic stress disorder (PTSD),17 and obsessive-compulsive disorder (OCD).18 Several meta-analyses have reported that second-generation antidepressants are superior to placebo in the treatment of GAD,19 PD,20,21 SAD,22 PTSD,23 and OCD.24 Some of these meta-analyses20,22 suggested the existence of study publication bias based on funnel plot asymmetry. However, such methods cannot prove the existence of publication bias; for that, one must access and analyze unpublished data as well.7 A recent study25 examined the efficacy of one selective serotonin reuptake inhibitor in the treatment of GAD and PD using a complete data set of trials sponsored by the manufacturer. This study showed that published trials had significantly larger effect sizes than did the unpublished trials.
In the present study, the first objective was to examine reporting bias in the scientific literature on the efficacy of second-generation antidepressants approved by the FDA for the treatment of anxiety disorders. By comparing published articles with corresponding FDA reviews, we examined the presence of study publication bias, outcome reporting bias, and spin. The second objective was to compare the magnitude of the overall effect based on published trial data from premarketing trials with the effect based on the full cohort of such trials registered with the FDA.
We began by identifying the inception cohort of premarketing trials for the indications of interest, then conducted a literature search for those trials. This process was similar to that of other studies.6,7
We identified the phase 2 and 3 clinical, double-blind, placebo-controlled trials registered with the FDA and conducted in pursuit of marketing approval of second-generation antidepressants for the treatment of GAD, PD, SAD, PTSD, and OCD. Nine drugs approved by the FDA for these indications were examined: 7 selective serotonin reuptake inhibitors (paroxetine hydrochloride, paroxetine controlled release [CR], sertraline hydrochloride, fluoxetine hydrochloride, fluvoxamine maleate, fluvoxamine CR, and escitalopram oxalate) and 2 serotonin norepinephrine reuptake inhibitors (venlafaxine hydrochloride extended release [ER] and duloxetine hydrochloride). We retrieved the FDA Drug Approval Packages (hereafter termed FDA reviews) from the FDA’s website (http://www.accessdata.fda.gov/scripts/cder/drugsatfda/index.cfm); if these packages were not available for download, we requested them from the FDA’s Freedom of Information Office (http://www.accessdata.fda.gov/scripts/foi/FOIRequest/requestinfo.cfm). We extracted the results the FDA used to decide whether the trial was positive (ie, whether it could be used to support marketing approval). Data were extracted preferably from the statistical review but also from the medical review and administrative correspondence (eg, memos by team leaders). In cases in which multiple primary end points were identified in a trial, results were extracted for the end point that was most consistent with the primary end point identified in other trials for the same indication. In accordance with previous publications,6,7 the FDA’s regulatory decisions were classified as (1) positive (clearly supporting efficacy) or (2) not positive, with the latter including both questionable (neither clearly positive nor clearly negative) and negative (not supportive of efficacy) trials. The questionable category included trials characterized by the FDA as “marginally” or “borderline” positive. 
These trials had nonsignificant P values for 1 or more of the primary end points but were considered by the FDA to be supportive of other positive trials because of significant findings on secondary variables. The questionable category also included “failed” trials (in which neither the study drug nor the active comparator demonstrated statistical superiority to placebo). For multiple-dose trials, we used the FDA’s overall decision. For purposes of meta-analysis, we extracted data only for approved dosages, thus excluding subtherapeutic dosages.6 Data extraction, classification, and entry were performed independently by 2 investigators (A.M.R. and C.D.W.) with discrepancies resolved by consensus (A.M.R., C.D.W., and E.H.T.).
Having identified the inception cohort of premarketing trials registered with the FDA, we systematically searched for matching publications using PubMed, EMBASE, and the Cochrane Central Register of Controlled Trials without language restrictions, with a search cutoff date of December 19, 2012. We searched the title field for the name of the drug and the type of anxiety disorder and any field for the word placebo. For example, when searching PubMed for relevant escitalopram trials for GAD, the search syntax was escitalopram [title] and (generalized [title] or generalised [title]) and anxiety [title] and disorder [title] and placebo. Publication matches for trials registered with the FDA were identified using the following information: drug name, name of the active comparator (if applicable), dosage groups, sample sizes, trial duration, and names of the investigators. Stand-alone publications (ie, a full-length article devoted to reporting the results of a single trial) were preferred. If no stand-alone publication could be found, pooled analyses were sought in which multiple trials were addressed in a single article. Data from journal articles that pooled data from multiple trials that were not identical in design according to the FDA were excluded from the present study. Pooled-trials publications were also excluded when 1 or more of the included trials had been published earlier as a stand-alone publication and the pooled-trials publication did not present separate results for the included trials. Finally, data published only in abstract form were excluded.
Several steps were taken to minimize the possibility that we missed matching publications. If no publication was found via the electronic database search, PubMed was used to identify the 3 most recent review articles focusing on the efficacy of the trial drug for the condition treated in the trial. The reference lists for those publications were hand searched. In addition, the drug sponsor’s website was searched for bibliographic information on the trials in question.
To assess drug efficacy according to published journal articles, we used the primary end point specified in the publication. If a primary end point was not specified or no end point was clearly emphasized, we extracted the drug-placebo comparison reported first in the text of the results section or in the Table or Figure first cited in the text.7 If multiple end points were identified as primary in a single study, results were extracted for the end point reviewed as primary by the FDA. Data extraction and entry were done independently (A.M.R. and R.A.S.) with discrepancies resolved through consensus (A.M.R., Y.A.d.V., and R.A.S.).
In addition, each article’s conclusion was classified as positive or not positive (including questionable and negative) based on the sentence in the abstract reporting the authors' overall conclusion regarding study outcome. Conclusions were classified independently by 2 authors (A.M.R. and P.d.J.); one (P.d.J.) was blinded to the results of the FDA review.
This study was approved by the research and development committee of the Portland Veterans Affairs Medical Center. Because of the nature of the study, informed consent from individual participants was not required.
The binomial probability test was used to assess whether the proportion of positive conclusions in journal articles was significantly different from the proportion of positive trials according to the FDA. In addition, using the Fisher exact test, we examined whether not-positive trials (according to the FDA) were more likely to be unpublished, or published in a positive manner, compared with positive trials. The presence of study publication bias (trial results not published), outcome reporting bias (changes in analysis or primary end point affecting the significance of findings), and spin (abstract conclusion not consistent with published results on primary end point) were also compared for positive and not-positive trials.
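As a concrete illustration, the two comparisons described above can be reproduced from the counts reported in the Results section (41 of 57 FDA-positive trials; 43 of 45 positively framed articles; 40 of 41 positive and 3 of 16 not-positive trials published in agreement with the FDA). This is a minimal sketch using only the Python standard library; the one-sided binomial tail shown here is an assumption and may differ slightly from the exact two-sided test the authors ran:

```python
from math import comb

# Counts taken from the Results section of the article
fda_pos, fda_total = 41, 57   # trials positive according to the FDA
pub_pos, pub_total = 43, 45   # journal articles with positive conclusions

# Binomial test (one-sided): probability of >= 43 positive conclusions
# among 45 articles if the true positive rate were the FDA's 41/57
p0 = fda_pos / fda_total
p_tail = sum(comb(pub_total, k) * p0**k * (1 - p0)**(pub_total - k)
             for k in range(pub_pos, pub_total + 1))

# Risk ratio: published-in-agreement rate for FDA-positive trials (40/41)
# vs FDA-not-positive trials (3/16)
rr = (40 / 41) / (3 / 16)

print(f"one-sided binomial P = {p_tail:.5f}, risk ratio = {rr:.2f}")
```

The risk ratio reproduces the reported 5.20, and the binomial tail probability falls well below .001, consistent with the paper's P < .001.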
We conducted 2 meta-analyses: one using data from the FDA reviews and another using the corresponding published data.6,7 The Hedges g value was used as a measure of effect size and was calculated with the following equation, in which t represents the t test statistic and n1 and n2 are the numbers of study participants in the drug and placebo groups, respectively:

g = t × √(1/n1 + 1/n2)
The values for g were adjusted using the Hedges correction for small sample size.6,7 The t statistic was calculated from the precise P value and the trial sample size using Microsoft Excel’s TINV function (Microsoft Corporation), multiplying t by −1 when the study drug was inferior to placebo. If a precise P value was not available because it was reported as a range (eg, P < .05), the t statistic was calculated from other summary statistics (ie, SDs, SEs, and 95% CIs around the mean difference). When the data were presented as dichotomized statistics, the Hedges g value was calculated from χ2 analysis.26 If none of these data were available and FDA and journal data were otherwise congruent, data were imputed with information extracted from the other source. In addition, for 2 journal articles,27,28 the Hedges g value was calculated from the F statistic (analysis of variance).26 Finally, P values and other efficacy data were not reported for 2 negative FDA trials that were, in one case, not published and, in the other case, published as positive (trial 95-003: sertraline for SAD and trial 495: sertraline for OCD). These P values were imputed with P = .396, which was derived from 16 nonsignificant but precise P values according to the method described by Turner et al6 in the appendix of their article. For each multiple-dose study, we computed a single study-level effect size using a fixed-effects model to pool the values from that trial’s multiple treatment arms. When calculating the SE, we counted each trial’s shared placebo n once, rather than redundantly for each dose group, to avoid a spuriously low SE. A limitation of this method is that it only partially addresses error due to correlation between the comparisons.3 Calculations of all effect sizes were performed independently by 2 authors (A.M.R. and Y.A.d.V.).
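The t-to-g conversion and small-sample adjustment described above can be sketched as follows (a minimal illustration using the standard formulas for converting a between-groups t statistic to a standardized mean difference; variable names are ours):

```python
from math import sqrt

def hedges_g(t, n1, n2):
    """Convert a between-groups t statistic to Hedges g.

    d = t * sqrt(1/n1 + 1/n2) is the standard t-to-d conversion for
    two independent groups; j is the Hedges small-sample correction.
    """
    d = t * sqrt(1 / n1 + 1 / n2)
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)
    return j * d

# Example: t = 2.0 with 50 participants per arm
print(round(hedges_g(2.0, 50, 50), 3))  # → 0.397
```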
The random-effects pooling method was used to generate summary estimates of the Hedges g, and I2 and 95% CIs around I2 were calculated to assess heterogeneity.29 The I2 value reflects the proportion of total variance explained by heterogeneity. Meta-regression, using the restricted maximum likelihood method, was conducted to examine the impact of publication status on the effect estimates. In addition, prespecified subgroup analyses were performed for each anxiety disorder. All statistical analyses were performed using Stata, version 11.0 (StataCorp).
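A minimal sketch of random-effects pooling with an I² heterogeneity estimate, using the common DerSimonian-Laird estimator of the between-study variance τ² (the authors used Stata routines, and their meta-regression used restricted maximum likelihood, so numeric details may differ):

```python
from math import sqrt

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate, its SE, and I^2 (in percent)."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # % variance from heterogeneity
    return pooled, se, i2

# Toy example with three hypothetical trial effect sizes
pooled, se, i2 = dersimonian_laird([0.3, 0.4, 0.5], [0.01, 0.01, 0.01])
```

With equal variances and no excess heterogeneity (Q ≤ df), τ² collapses to 0 and the estimate reduces to the fixed-effect mean.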
We analyzed 9 second-generation antidepressants for data related to the 5 anxiety disorders. Within those 45 possible drug-indication combinations, 21 are FDA approved. Of those 21 combinations, we were able to download 9 FDA approval packages through the FDA website; for the remaining 12, we submitted requests to the FDA Freedom of Information Office, which fulfilled 11 of them (the FDA informed us that the drug approval package for fluoxetine for panic disorder would not be available for at least 18 months). Twenty approval packages were available for the present study (eTable 1 in the Supplement). These drug approval packages, which were issued between 1994 and 2008, reviewed the results of 57 randomized, placebo-controlled short-term trials.
For the 57 above-mentioned FDA-registered trials, we identified 48 publications presenting the results of 52 trials. Three of these articles were excluded from further analyses.30-32 As a result, 3 additional trials (trials 637 [paroxetine for GAD], 627 [paroxetine for PTSD], and 514 [sertraline for PD]) were judged to be not fully published. Two articles pooled trials that were not identical in design,30,31 and another pooled-trials article failed to present separate results for the included trials32 and included a trial that was previously published as a stand-alone publication.33 A flow diagram of this process is shown in eFigure 1 in the Supplement, and characteristics of the included trials are presented in Table 1.
The proportion of positive findings was 72% (41 of 57) according to the FDA vs 96% (43 of 45) according to the published literature. This difference was statistically significant (binomial test, P < .001).
Of the 41 positive trials, 40 trials (98%) were published in agreement with the FDA. By contrast, of the 16 not-positive trials, only 3 (19%) were published in agreement with the FDA (Figure 1). This difference was statistically significant (Fisher exact test, P < .001). Overall, trials that the FDA judged as positive were 5 times more likely to be published in agreement with that decision than were FDA-determined not-positive trials (risk ratio, 5.20; 95% CI, 1.87-14.45; P < .001).
Sixteen of the 57 trials (28%) were not positive according to the FDA. Seven of these 16 not-positive trials (44%) were not published, but only 1 of the 41 positive trials (2%) was not published (Table 1). This difference was statistically significant (Fisher exact test, P < .001).
For 3 of the 16 not-positive trials (19%),33,43,61 results were published with a conclusion that conflicted with that in the FDA review, changing the effects from nonsignificant to statistically significant. By contrast, outcome reporting bias was found in none of the 41 FDA-positive trials (Table 1). The difference in proportions was statistically significant (Fisher exact test, P = .02).
One of the 3 above-mentioned publications (trial 120 [paroxetine for PD])43 presented only observed-cases analyses for the primary outcome; according to the FDA, the primary analysis involved last-observation-carried-forward analyses, the results of which were not statistically significant. In the article33 presenting the results of trial 529, data from participants with PD who were randomized to different dosages of sertraline were pooled and compared with the placebo group, yielding a significant result. The FDA review showed that the primary results for each of the dosage groups were nonsignificant. Finally, one article61 presenting the results of trial 95-003, which compared the effect of sertraline (with and without exposure therapy) with that of placebo (with and without exposure therapy) in patients with SAD, combined scores on 3 end points (disorder-specific Clinical Global Impression Scale [severity and improvement] and Social Phobia Scale) in response vs nonresponse categories. According to the FDA, the primary end point was the severity total score of the disorder-specific Clinical Global Impression Scale, and results for this end point were nonsignificant.
Spin was present in an additional 3 of the 16 not-positive trials (19%)52,70,74 and in none of the positive trials (Fisher exact test, P = .02) (Table 1). Each of these 3 articles reported in the results section that the primary end point was nonsignificant but concluded in the abstract that the trial was positive. The FDA classified these trials as questionable (trial 237/248: sertraline for OCD74) or negative (trial E079: fluoxetine for OCD70 and trial 391: venlafaxine ER for PD52). Conclusions on study drug efficacy for these trials, according to the FDA and the authors of the journal articles, are included in eTable 2 in the Supplement.
The pooled effect size based on the FDA data was 0.33 (95% CI, 0.29-0.38; P < .001). Heterogeneity was moderate (I2 = 39%; 95% CI, 15%-56%). For trials published in agreement with the FDA review results, the pooled effect size (Hedges g = 0.38; 95% CI, 0.34-0.42; P < .001) was larger than the pooled effect size of trials that were not published or published in disagreement with the FDA conclusion (Hedges g = 0.17; 95% CI, 0.09-0.26; P < .001). Meta-regression showed this difference to be statistically significant (β = 0.21; 95% CI, 0.12-0.30; t = 4.61; P < .001).
The pooled effect size based on the literature was 0.38 (95% CI, 0.33 to 0.42; P < .001). Heterogeneity was low (I2 = 30%; 95% CI, 0% to 51%). This effect size represented a 15% increase in effect size compared with the value based on the FDA data. This difference was not statistically significant by meta-regression (β = 0.04; 95% CI, −0.02 to 0.10; t = 1.36; P = .18).
Effect sizes based on data from the FDA reviews were 0.32 for GAD, 0.28 for PD, 0.27 for PTSD, and 0.39 for both OCD and SAD. For all disorders, the pooled effect sizes of trials published in agreement with the FDA review results were larger than the pooled effect sizes of trials that were not published or were published in disagreement with the FDA conclusion (Table 2). As a result, forest plots for all disorders showed fewer nonsignificant trials according to the literature than according to the FDA, especially for PD and OCD (Figure 2 and Figure 3; see eFigure 2 in the Supplement for GAD, SAD, and PTSD).
Effect sizes based on the literature were larger for all disorders compared with effect sizes based on the FDA reviews, with the smallest increases for GAD (Hedges g = 0.34, 6% increase) and SAD (Hedges g = 0.42, 8% increase) and larger increases for OCD (Hedges g = 0.45, 15% increase), PTSD (Hedges g = 0.32, 19% increase), and PD (Hedges g = 0.35, 25% increase). However, the differences in effect estimates based on the journal articles and the FDA reviews were not statistically significant for any of the individual disorders (Table 2).
The present study shows the presence of reporting bias in randomized clinical trials on the efficacy of second-generation antidepressants for anxiety disorders. Trials that the FDA judged to be positive were more than 5 times as likely to be published in agreement with the FDA analysis as were not-positive trials. As a result, 96% of the journal articles (43 of 45) were framed positively, but only 72% of the trials (41 of 57) were deemed positive by the FDA. All reporting biases examined (ie, study publication bias, outcome reporting bias, and spin) were present among the included trials.
In a previous study6 that examined reporting bias in trials on second-generation antidepressants for major depressive disorder, the overall effect size based on the FDA data was 0.31, which is comparable to the effect size of 0.33 found in the present study. After conducting 2 meta-analyses, one based on data from the FDA reviews and the other based on data from the corresponding journal articles, we found that reporting bias inflated the apparent effect size by 15%. This increase was not statistically significant, in contrast to the larger inflation factor (32%) found earlier with major depressive disorder.6 For the individual anxiety disorders, the inflation factors ranged from 6% (GAD) to 25% (PD), indicating the importance of using unbiased data in meta-analyses on the efficacy of second-generation antidepressants for treatment of anxiety disorders.
In the main analyses we combined 5 disorders classified as anxiety disorders in the DSM-IV; however, in the DSM-5, OCD is now classified under obsessive-compulsive and related disorders, and PTSD is under trauma- and stressor-related disorders. Therefore, with the recent change in taxonomy, our grouping of these disorders could be viewed as a limitation, although the efficacy of the drugs was comparable across disorders. A clearer limitation of the present study is the small number of trials for the individual anxiety disorders, decreasing the power of the subgroup analyses. An additional limitation is that we did not examine biased reporting of harm outcomes, which figures into the overall risk-benefit ratio of a drug, but such an examination would have been beyond the scope of the present study. Certain data available only in pooled-trials publications were classified as unpublished, which could also be viewed as a limitation. However, a study76 of antidepressant trials submitted to the Swedish drug regulatory authority showed that positive trials were more likely to be presented as stand-alone publications, and negative trials tended to be reported only within pooled-trials publications. Pooled analyses may not follow the predetermined analysis plan and power calculation and can therefore yield different conclusions than the original trials. Pooled analyses are also associated with “salami slicing” (publishing similar results from one study in multiple publications).77 Therefore, although these publications may provide new information, especially on subgroups and secondary end points, they are susceptible to bias.78 This bias can be reduced by first publishing the original trial results. Future research could assess the bias that is introduced by pooled-trials publications. 
Finally, we did not contact drug sponsors to ask whether specific trials were published in the scientific literature, so there is a small chance that trials were misclassified as unpublished. However, given the extensiveness of our literature search, any publications we missed would be unlikely to be discoverable by the typical health care professional.
A strength of this study is that, for 20 of the 21 FDA-approved drug-indication combinations, we were able to include data from all premarketing randomized clinical trials, thereby allowing a reliable assessment of different reporting biases for these trials. However, we could not include data from rejected drug-indication applications because the FDA does not release these reviews.79 It is likely that the amount of reporting bias found would increase if those trials could also be examined. In addition, our estimation of the amount of reporting bias present might be influenced by the fact that all trials were sponsored by pharmaceutical companies. Yet reporting bias is not restricted to pharmacologic treatments sponsored by drug companies.79 Because reporting bias has been shown80 for the treatment of depression with psychotherapy, it would be worthwhile to systematically assess reporting bias in trials of psychotherapy for anxiety disorders.
Spin can result from different intentional or unintentional strategies, for example, by focusing on secondary end points for which significant results were obtained.81 Journal articles for which spin was identified also often reported “marginally significant results” (P values between .05 and .10) in the present study. Ideally, interpretation of trial results should not be based solely on a P value indicating whether results are statistically significant.5 In addition to providing P values, future research could consider including Bayes factors as a measure of the strength of the evidence. Bayes factors stem from Bayesian statistics and have the advantage of expressing the strength of the evidence on a continuous scale.82
Reporting bias significantly increased the number of positive vs negative publications in the literature in the present study. This outcome likely affects physicians’ perceptions of the efficacy of these drugs, which could reasonably be expected to affect prescribing behavior. In both Europe and the United States, use of antidepressants has been rising markedly during the past 2 decades, with much of that use appearing to be driven by nonspecialists in primary care settings.83,84 Although these studies could not take into account the indications for which the drugs were prescribed, a realistic view of the efficacy of these agents is important across all indications.83,84 Results of the present study and additional studies6,25 comparing published results with data from FDA reviews or other registries can perhaps assist physicians in gaining a more realistic view of the evidence for the efficacy of antidepressants in the short-term treatment of affective disorders.
This study adds to the growing body of literature establishing the pervasiveness of reporting bias.79,85 It also highlights the need to address this problem using various measures, as recently reviewed.86 One suggested approach, which would address outcome reporting bias and spin (but not study publication bias), would require peer reviewers to make preliminary decisions based on the strength of the methods in the original trial protocol79 so that their decisions are not influenced by the statistical significance of the study results.87 Use of study registries, such as ClinicalTrials.gov, can also reduce reporting bias in the scientific literature,79 but this registry does not yet function optimally: for most trials subject to mandatory reporting of results within 1 year of trial completion, the results were not posted within that time frame.88
Although most trials on the efficacy of FDA-approved second-generation antidepressants for anxiety disorders evaluated in this meta-analysis were positive, various reporting biases were present. These reporting biases led to an overly positive representation of significant findings in the scientific literature.
Submitted for Publication: August 11, 2014; final revision received November 4, 2014; accepted December 12, 2014.
Corresponding Author: Annelieke M. Roest, PhD, Department of Psychiatry, University Medical Center Groningen, Hanzeplein 1, 9713 GZ Groningen, the Netherlands (firstname.lastname@example.org).
Published Online: March 25, 2015. doi:10.1001/jamapsychiatry.2015.15.
Author Contributions: Dr Roest had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Roest, de Jonge, Schoevers, Turner.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Roest, de Jonge.
Critical revision of the manuscript for important intellectual content: de Jonge, Williams, de Vries, Schoevers, Turner.
Statistical analysis: Roest, Schoevers, Turner.
Obtained funding: Roest, de Jonge.
Administrative, technical, or material support: de Jonge, Williams, de Vries, Turner.
Study supervision: de Jonge, Williams, Schoevers, Turner.
Conflict of Interest Disclosures: Dr Schoevers received an unrestricted research grant as a coapplicant from Wyeth Pharmaceuticals, the Netherlands (2006) for a study comparing 2 forms of psychotherapy for major depressive disorder. From 1998 to 2001, Dr Turner served as a medical reviewer at the US Food and Drug Administration (FDA). From 2001 to 2005, Dr Turner provided outside consulting to Bristol-Myers Squibb, Eli Lilly, and GlaxoSmithKline. From 2004 to 2005, Dr Turner was on the speakers bureaus of AstraZeneca, Bristol-Myers Squibb, and Eli Lilly. No other disclosures were reported.
Funding/Support: This study was supported by grant KS2011(1)-120 from the Dutch Brain Foundation (Dr de Jonge).
Role of the Funder/Sponsor: The Dutch Brain Foundation had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Additional Contributions: Jay Augsberger, MD, Department of Veterans Affairs, assisted in obtaining FDA Drug Approval Packages from the FDA’s Freedom of Information Office; he received no financial compensation.