Context Policy makers and physician organizations have recently called for more comparative effectiveness (CE) research, yet little is known about existing CE studies.
Objective To describe the characteristics of recently published CE studies evaluating medications.
Design, Setting, and Participants Analysis of all randomized trials, observational studies, and meta-analyses involving medications published in the 6 general medicine and internal medicine journals with the highest impact factor between June 1, 2008, and September 30, 2009.
Main Outcome Measures The prevalence and characteristics of CE studies (those comparing existing, active treatments) and non-CE studies (those involving novel therapies or those using an inactive control).
Results We identified 328 studies evaluating medications, 104 of which were CE studies. Among the CE studies, 45 (43%; 95% confidence interval [CI], 34%-53%) compared different medications, 11 (11%; 95% CI, 5%-18%) compared medications with nonpharmacologic interventions, 32 (31%; 95% CI, 22%-41%) compared different pharmacologic strategies, and 16 (15%; 95% CI, 9%-24%) compared different medication dosing schedules. Twenty (19%; 95% CI, 12%-28%) CE studies focused on safety and 2 (2%; 95% CI, 0%-7%) included cost-effectiveness analyses. Comparative effectiveness studies were less likely than non-CE studies to have been exclusively commercially funded: 13% (95% CI, 8%-22%) vs 45% (95% CI, 38%-52%), respectively (P < .001). In total, 90 (87%; 95% CI, 78%-92%) of the CE studies received noncommercial funding, including 66 that received government funding (63%; 95% CI, 53%-73%). Of 212 randomized trials, 97 (46%; 95% CI, 39%-53%) used an active comparator; the rest used an inactive control. Active-comparator trials were less likely than trials with inactive controls to report positive results: 44% (95% CI, 33%-55%) vs 66% (95% CI, 57%-75%), respectively (P = .002).
Conclusions In these high-impact general medicine journals, approximately one-third of studies evaluating medications were CE studies. Of these studies, only a minority compared pharmacologic and nonpharmacologic therapies, few focused on safety or cost, and most were funded by noncommercial funding sources.
Comparative effectiveness (CE) research refers to studies that compare “the benefits and harms of different interventions and strategies to prevent, diagnose, treat, and monitor health conditions.”1,2 In contrast to research on novel interventions and strategies, CE studies help physicians use existing treatments and treatment strategies more effectively.3-5 Comparative effectiveness studies also help physicians determine which interventions and strategies are most effective, safest, or least costly when multiple options are available.3-5 Because of concerns that insufficient research is currently devoted to improving the use of existing therapies, the US Congress recently passed legislation that will provide more than $1 billion to support CE studies.5,6
Despite the recent interest in CE research, only limited information is available about existing CE studies.5,7 Two previous inventories of CE research (not peer-reviewed) provide only basic information about the characteristics of CE studies.1,8 Additional information about existing CE research could help guide policy makers as they determine the amount and types of CE research that are most needed.
In this study, we describe the characteristics of recently published CE research concerning medications from the 6 general medicine and internal medicine journals with the highest impact factors.
We identified all randomized trials, observational studies, and meta-analyses involving medications published in the 16-month period between June 1, 2008, and September 30, 2009, in the 6 general medicine and internal medicine journals with the highest impact factor (New England Journal of Medicine, Lancet, JAMA, Annals of Internal Medicine, BMJ, and Archives of Internal Medicine).9 One author (M.H.) performed the literature review, abstracted all information from included studies, and characterized each study regarding outcomes of interest according to prespecified criteria. The studies were identified by manually reviewing all original articles with a formal abstract published in the 6 journals during the specified time period.
All human studies in which medications (either a specific medication or a class of medications) were compared with either an active treatment (such as another medication or a nonpharmacologic therapy) or an inactive control (either a placebo or no therapy) were selected for inclusion. Randomized trials, case-control studies, cohort studies, and meta-analyses were included. Systematic reviews, before and after studies without a control group, and modeling studies not involving actual patients were excluded. Studies involving only topical therapies, including locally active injected medications (but not systemically active injected medications), also were excluded because these therapies were considered nonpharmacologic interventions for the purposes of this study.
Each selected study was reviewed to determine the study design, sample size, disease under study, whether the study medications were US Food and Drug Administration (FDA)–approved when study enrollment began, whether the study was commercially funded (ie, received funding aside from free study medications from a for-profit company), whether the study received funding from a government agency (either in the United States or another country), and whether the lead author reported financial ties to commercial entities. Randomized trials reporting a primary study end point were reviewed to determine the trial phase (ie, phase 1, 2, 3, 4, or unassigned) as reported either in the text of the study, the trial registry, or both, and whether a noninferiority data analysis (either alone or in addition to a superiority analysis) was performed.
Studies were classified as CE studies if they (1) involved existing (rather than novel) medications; and (2) compared active therapies (active-comparator studies) rather than an active therapy with an inactive control such as a placebo (inactive-comparator studies). We classified studies as involving existing medications if all the study medications were FDA-approved. Randomized trials involving a noninferiority analysis were categorized as studies of novel therapies because noninferiority trials are generally conducted in an effort to obtain FDA approval for an unapproved or off-label use of a medication.10,11
Because of previous suggestions that more CE studies are needed comparing pharmacologic with nonpharmacologic treatments, comparing different therapeutic strategies, and determining optimal medication use,5,7,12 we further classified CE studies into 1 of 4 categories: (1) studies comparing 2 or more medications with each other; (2) studies comparing medications with nonpharmacologic interventions (eg, surgery or lifestyle interventions); (3) studies comparing different pharmacologic strategies for medication use, specifically different treatment initiation thresholds or treatment targets (eg, different hemoglobin A1C goals in patients with diabetes) or different monitoring parameters for optimizing medication use (eg, adjustment of diabetes medications using continuous glucose monitoring vs conventional glucose monitoring); and (4) studies comparing different medication doses, durations or frequencies of treatment, or different medication formulations.
Because of previous suggestions for conducting more CE studies that focus on medication safety and that include cost-effectiveness analyses,2,5,7,13 we assessed all studies to determine whether the primary focus was medication safety and whether a formal cost analysis was performed. Studies were designated as primarily safety studies if they focused on adverse medication interactions, fetal outcomes following maternal medication use, medication discontinuation rates, or adverse outcomes unrelated to the intended medication indication (eg, cancer rates in patients using statins). Randomized trials were designated as primarily safety studies if any of the primary study end points met these criteria. Previous research14-18 has also classified studies as safety studies according to the judgment of a reviewing author. Because phase 4 randomized trials are postmarketing surveillance trials designed, in part, to identify less common adverse medication effects,19,20 we also assessed the prevalence of phase 4 trials as a secondary measure of safety research.
Eligible randomized trials were reviewed to determine if there was a statistically significant positive result for the newer therapy relative to the control group with regard to any primary trial end point. Trials were considered ineligible for this analysis if there was not a prespecified primary end point or if there were 3 or more treatment groups because, in these cases, it was not possible to determine which 2 groups should be considered when determining if there was a positive result.
Quality Control Assessment
Because there was some subjectivity in classifying studies as active-comparator vs inactive-comparator studies, in determining the type of comparison performed (eg, comparisons of different medications or of medication use vs nonpharmacologic therapy), and in determining whether a study focused primarily on safety, a quality control check was performed. One author (D.M.) independently reviewed all studies published during the 2nd, 6th, 10th, and 14th months of the 16-month study period and independently classified them with regard to the above-mentioned outcomes. Unlike the primary data extractor (M.H.), D.M. was blinded to each study's funding source during the quality control check. Interrater agreement was assessed by calculating κ statistics.
We compared several key characteristics including funding, design, and outcomes7 in comparative vs noncomparative effectiveness studies; among CE studies that received exclusive, partial, or no commercial funding; and between studies that did vs did not receive government funding. We also compared key characteristics of randomized trials that used active vs inactive comparators (according to our definitions, active-comparator trials, unlike CE trials, may involve non–FDA-approved medications and may use noninferiority analyses).
We performed post hoc power calculations for 3 comparisons that we prespecified as key analyses of interest: the percentage of CE vs non-CE studies receiving exclusive commercial funding21; the percentage of active-comparator vs inactive-comparator randomized trials reporting positive results22; and the percentage of exclusively commercially funded vs noncommercially funded CE studies comparing different pharmacologic strategies.23 We assumed a 20% exclusive commercial funding rate for CE studies, a 30% exclusive commercial funding rate for non-CE studies, a 50% positive results rate for active-comparator trials, and a 65% positive results rate for inactive-comparator trials. Additionally, we assumed that 35% of noncommercially funded and 15% of exclusively commercially funded CE studies would involve comparisons of different pharmacologic strategies. Based on these assumptions and the numbers of included studies, the power to detect a significant difference for these 3 variables was 92.4%, 89.0%, and 22.7%, respectively, at an α level of .05.
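A power calculation of this kind can be sketched as follows. This is a normal-approximation sketch only (the article does not specify the exact method or software used for the power calculations); the function name and the example inputs are illustrative, not the authors' actual computation.

```python
import math


def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def two_proportion_power(p1: float, p2: float, n1: int, n2: int) -> float:
    """Approximate power of a two-sided two-proportion z-test at alpha = .05.

    A normal-approximation sketch; not necessarily the method the authors used.
    """
    z_alpha = 1.959964  # two-sided critical value for alpha = .05
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se0 = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # SE under H0
    se1 = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)    # SE under H1
    diff = abs(p1 - p2)
    # Probability of rejecting H0 in either tail, given the assumed true rates
    return (norm_cdf((diff - z_alpha * se0) / se1)
            + norm_cdf((-diff - z_alpha * se0) / se1))


# Illustrative inputs: assumed positive-result rates of 50% vs 65%, with the
# 97 active-comparator and 115 inactive-comparator trials from this study
power = two_proportion_power(0.50, 0.65, 97, 115)
```

Because the approximation and the exact trial counts entering each calculation differ, values produced by this sketch will not reproduce the reported 92.4%, 89.0%, and 22.7% figures exactly.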
Two-sided χ2 tests with an a priori significance level of P ≤ .05 were used for all comparisons reported in this study. Confidence intervals (CIs) were calculated as exact 95% CIs. All statistical analyses were performed using Stata version 11.0 (StataCorp LP, College Station, Texas).24
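These two procedures can be sketched in a few lines of Python. Note one deliberate simplification: the interval below is a Wilson score approximation, whereas the study reports exact (Clopper-Pearson) intervals computed in Stata, so the bounds will differ slightly from those in the article.

```python
import math


def wilson_ci(k: int, n: int, z: float = 1.959964) -> tuple:
    """95% Wilson score interval for a proportion k/n.

    An approximation; the article itself reports exact Clopper-Pearson CIs.
    """
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - margin, center + margin


def chi2_2x2(a: int, b: int, c: int, d: int) -> tuple:
    """Pearson chi-square statistic and two-sided P value for a 2x2 table
    [[a, b], [c, d]] (df = 1, no continuity correction)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For df = 1, chi-square equals z squared, so P = 2 * (1 - Phi(sqrt(stat)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(math.sqrt(stat) / math.sqrt(2))))
    return stat, p_value


# Example: 45 of 104 CE studies compared different medications
# (reported in the article as 43%; exact 95% CI, 34%-53%)
low, high = wilson_ci(45, 104)
```

Running `wilson_ci(45, 104)` yields bounds close to, but not identical to, the exact interval reported in the article.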
Further details of the study methods are available online (eAppendix).
We reviewed 1500 original articles in the 6 medical journals, of which 326 were randomized trials, observational studies, or meta-analyses involving medications that met the study inclusion criteria. Two of these articles reported on 2 distinct medication interventions, each of which independently met the inclusion criteria; each intervention was counted separately, yielding a total of 328 medication studies for the analysis. Of these, 104 (32%; 95% CI, 27%-37%) were CE studies and 224 (68%; 95% CI, 63%-73%) were non-CE studies. A list of the CE and non-CE studies is available online (eAppendix).
Of the 104 CE studies, 45 compared 2 or more medications with each other (43%; 95% CI, 34%-53%), 11 compared medications with nonpharmacologic interventions (11%; 95% CI, 5%-18%), 32 compared different pharmacologic strategies (31%; 95% CI, 22%-41%), and 16 compared different medication doses, durations or frequencies of treatment, or different medication formulations (15%; 95% CI, 9%-24%) (Figure).
Characteristics of the CE and non-CE studies are shown in Table 1. Funding sources, design, and outcomes of these studies are shown in Table 2. CE studies were more likely than non-CE studies to have received government funding from any country, to have received US government funding, and to have included a cost-effectiveness analysis. CE studies were less likely than non-CE studies to be exclusively commercially funded, to have a lead author with commercial ties, and to report positive results that were statistically significant.
The characteristics of the commercially funded vs noncommercially funded CE studies, and government-funded vs nongovernment-funded CE studies, are shown in Table 3. Exclusively commercially funded randomized trials were more likely than noncommercially funded trials to be phase 4 postmarketing surveillance trials. Otherwise, there were no statistically significant differences with regard to any of the characteristics we examined; however, the power to detect differences was limited for most of these outcomes.
In total, noncommercial entities jointly or exclusively funded 90 of the 104 CE studies (87%; 95% CI, 78%-92%), including 10 of the 11 CE studies comparing medications with nonpharmacologic interventions (91%; 95% CI, 59%-100%), 30 of the 32 CE studies comparing different pharmacologic strategies (94%; 95% CI, 79%-99%), 18 of the 20 CE studies focusing on safety (90%; 95% CI, 68%-99%), 8 of the 13 phase 4 postmarketing surveillance CE trials (62%; 95% CI, 32%-86%), and both of the 2 CE studies with a cost-effectiveness analysis (100%; 95% CI, 16%-100%). Government entities at least partially funded 66 of the 104 CE studies (63%; 95% CI, 53%-73%).
Of the 212 CE and non-CE randomized trials reporting on a primary end point, 97 used an active-comparator group (46%; 95% CI, 39%-53%) and 115 used an inactive control (54%; 95% CI, 47%-61%). Exclusively commercially funded trials were less likely than trials receiving noncommercial funding to use an active comparator (36 of 108 [33%; 95% CI, 25%-43%] vs 61 of 104 [59%; 95% CI, 49%-68%]; P < .001). Eligible active-comparator trials were less likely than eligible inactive-comparator trials to report positive results (37 of 84 [44%; 95% CI, 33%-55%] vs 76 of 115 [66%; 95% CI, 57%-75%]; P = .002).
Exclusively commercially funded trials (both active-comparator and inactive-comparator) were more likely than jointly or noncommercially funded trials to report positive results (69 of 106 [65%; 95% CI, 55%-74%] vs 44 of 93 [47%; 95% CI, 37%-58%]; P = .01). Among the exclusively commercially funded trials, active-comparator trials appeared to be less likely than inactive-comparator trials to report positive results, although this difference did not reach statistical significance (18 of 34 [53%; 95% CI, 35%-70%] vs 51 of 72 [71%; 95% CI, 59%-81%]; P = .07).
Of the 97 active-comparator trials reporting on a primary study end point, 23 used a noninferiority analysis (24%; 95% CI, 16%-33%) and 18 of these were exclusively commercially funded (78%; 95% CI, 56%-93%). Twenty-four of 97 active-comparator trials involved non–FDA-approved medications (25%; 95% CI, 17%-35%), whereas 53 of 115 inactive-comparator trials did (46%; 95% CI, 37%-56%).
Fifty-eight of the 328 CE and non-CE medication studies focused primarily on safety (18%; 95% CI, 14%-22%) and 20 of these were CE studies (34%; 95% CI, 22%-48%). Eight of the 58 safety studies were randomized trials reporting on a primary study end point, while 50 were observational studies, meta-analyses, and randomized trials not reporting on a primary end point. Thirty-two of the 212 randomized trials reporting on a primary end point were phase 4 postmarketing surveillance trials (15%; 95% CI, 11%-21%) and 13 of these were CE trials (41%; 95% CI, 24%-59%). In total, 51 of the safety studies (88%; 95% CI, 77%-95%; both CE and non-CE) received at least partial noncommercial funding, while 30 received government funding (52%; 95% CI, 38%-65%).
Of the 328 total studies, 2 included cost-effectiveness analyses (1%; 95% CI, 0%-2%), and both of these were CE studies (100%; 95% CI, 16%-100%).
A total of 75 of the 328 studies were reviewed for the quality control check, and 23 of these were classified as CE studies in both the primary data analysis and in the quality review. There was agreement between the data analysis and the quality review for 73 of the 75 studies with regard to classification as either an active-comparator vs inactive-comparator study (97%; 95% CI, 91%-100% [κ 0.94; 95% CI, 0.85-1.00]); for 73 of 75 studies with regard to classification as either a safety study or nonsafety study (97%; 95% CI, 91%-100% [κ 0.93; 95% CI, 0.83-1.00]); and for 22 of 23 CE studies with regard to classification of the type of comparison performed (96%; 95% CI, 79%-100% [κ 0.93; 95% CI, 0.81-1.00]).
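The κ statistic used for the interrater comparisons above can be sketched generically as follows. The rater labels in the example are hypothetical, not the study data; the code is a standard implementation of Cohen's κ, which the article does not further specify.

```python
def cohens_kappa(ratings_a: list, ratings_b: list) -> float:
    """Cohen's kappa for two raters' categorical labels on the same items.

    Kappa = (observed agreement - chance agreement) / (1 - chance agreement).
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of items both raters labeled identically
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the raters labeled independently with the same marginals
    p_e = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (p_o - p_e) / (1 - p_e)


# Hypothetical labels: "A" = active-comparator study, "I" = inactive-comparator study
rater1 = ["A", "A", "I", "I", "A", "I"]
rater2 = ["A", "A", "I", "I", "I", "I"]
kappa = cohens_kappa(rater1, rater2)
```

With 5 of 6 items in agreement and balanced marginals, this example yields κ of about 0.67; perfect agreement yields κ = 1, matching the scale of the values reported above.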
We identified 104 CE medication studies published during a 16-month period in 6 general medicine and internal medicine journals with the highest impact factors. These CE medication studies represent 32% of all randomized trials, observational studies, and meta-analyses involving medications during this time period. The fact that only 32% of the studies in these journals evaluating medications met our criteria for CE research supports previous concerns that only limited clinical research is currently devoted to helping physicians use existing therapies more effectively.3,5-7
Our study also showed that 11% of CE studies compared medications with nonpharmacologic interventions, whereas 31% compared different pharmacologic strategies. Studies involving nonpharmacologic treatments and different pharmacologic strategies are particularly important because they help clinicians make fundamental therapeutic decisions.5,7,12 Our findings support the view that there is a relative lack of these types of studies.12
We also found that CE studies were less likely than non-CE studies to have been exclusively commercially funded and that the vast majority of CE studies relied on noncommercial funding, including government funding. Noncommercial entities may fund a large portion of CE research because commercial entities presumably devote much of their research to the development of novel therapies and to funding inactive-comparator studies aimed at expanding indications for their products.25,26 Our findings highlight the importance of noncommercial and government funding for CE research.3
Our finding that 19% of CE studies (and 18% of all studies) focused on safety, and that only a minority of randomized trials were phase 4 postmarketing surveillance studies, suggests that efficacy outcomes are emphasized to a substantially greater extent than safety outcomes in medication research. Although a predominance of efficacy-oriented studies may seem appropriate, the withdrawal of several top-selling medications from the market has resulted in questions regarding whether safety concerns are adequately emphasized in existing medication studies.27,28 Our results could provide important information for policy makers who will need to determine whether increased emphasis should be placed on safety in future medication research. In addition, our finding that noncommercial entities funded the vast majority of safety studies supports concerns that commercially funded research may underemphasize safety issues.28-30
Our study also found that only 2% of CE studies and 1% of all studies included formal cost-effectiveness analyses. Cost-effectiveness analyses are critical for promoting efficient and effective health care.30-32 The small proportion of studies including such analyses may reflect journal editors' preference for publishing clinical outcome reports rather than a true dearth of cost-effectiveness studies.
In our study, more than half of randomized trials used an inactive control group rather than an active comparator. A disproportionate percentage of the inactive-comparator studies were exclusively commercially funded, which is consistent with the findings of other studies.8,23,33 Inactive comparators may be appropriate when alternative therapies do not exist for the disease under study.34 We did not attempt to estimate how many of the inactive-comparator trials in our analysis could have used an active comparator because of the technical challenges of making this determination; however, the fact that less than half of the trials used an active comparator supports the suggestion that inactive comparators may often be used inappropriately, particularly in commercially funded research.26,35
Inactive-comparator trials were more likely than active-comparator trials to report positive results, presumably because active comparators are more effective than inactive controls. This finding also seemed applicable to commercially funded trials, which were nonsignificantly more likely to report a positive result if an inactive comparator was used. This may provide one of several potential explanations for the widespread finding—confirmed by our study—that commercially funded research is more likely to report positive results than noncommercially funded research.14,23,36,37 Commercially funded trials may be more likely to show positive results in part because a higher percentage of them are inactive-comparator trials.38
In addition, our study showed that 24% of active-comparator randomized trials used a noninferiority analysis and most of these noninferiority trials were exclusively commercially funded. There is considerable debate about the appropriate role of noninferiority trials, which are intended to prove therapeutic equivalency rather than to clarify the optimal therapy.10,11,39,40 Some argue that this type of analysis may be helpful since it shows whether therapies with different adverse effect profiles and costs are equally effective.41 While the appropriate role of noninferiority analyses remains unclear, our study provides the best estimate we are aware of for the prevalence of noninferiority trials published in the 6 journals in our study.
There are several limitations to our study. First, our definition of CE studies is imperfect because some might consider certain inactive-comparator studies that evaluate widely used yet unproven interventions to be CE studies (eg, a study of antibiotics for the treatment of asymptomatic Helicobacter pylori infection). Additionally, our definition does not perfectly capture studies of existing medications, ie, some non–FDA-approved medications may be available for clinical use in countries other than the United States, whereas some trials using a noninferiority design may concern established medication indications. Conversely, not all off-label uses of medications are conducted using a noninferiority design.
Second, our study focuses primarily on medication studies, but CE research may concern a broad array of interventions and strategies such as pharmacologic therapies, procedures, behavioral changes, diagnostic tests, or health care delivery strategies.1
Third, we included only studies published in the most prominent general medicine and internal medicine journals, and therefore our findings may not be representative of all existing CE research. We selected these journals, however, because they are the most widely read and quoted and receive the most media coverage, and thus are disproportionately likely to influence clinicians.
Fourth, since there is often a lag of several years between the time a study is conceived and when the results are published, it is possible that the CE studies we analyzed do not accurately reflect CE studies that are currently being conducted but have yet to be published.
Fifth, the author who extracted the study data was not blinded to each study's funding source, which may have biased the results. Because classifications were made using predefined objective criteria, however, any such bias likely had only a minimal impact on the results. Additionally, we are reassured by the high concordance between the assessments of the abstracting author and those of the author who performed the blinded quality control check.
Overall, this study of CE research involving medications underscores the importance of the recent legislation passed in the United States to expand public funding for CE studies. In particular, our findings suggest government and noncommercial support should be increased for studies involving nonpharmacologic therapies, for studies comparing different therapeutic strategies, and for studies focusing on the comparative safety and cost of different therapies. In addition, our findings highlight the need for regulatory agencies like the FDA to require active-comparator trials for medication approval whenever feasible.42
Corresponding Author: Michael Hochman, MD, Department of Medicine, Keck School of Medicine, University of Southern California, 1975 Zonal Ave, KAM 500, Los Angeles, CA 90089-9034 (mhochman@usc.edu) or (meh1979@gmail.com).
Author Contributions: Dr Hochman had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Hochman, McCormick.
Acquisition of data: Hochman.
Analysis and interpretation of data: Hochman, McCormick.
Drafting of the manuscript: Hochman.
Critical revision of the manuscript for important intellectual content: Hochman, McCormick.
Statistical analysis: Hochman, McCormick.
Administrative, technical, or material support: Hochman, McCormick.
Study supervision: McCormick.
Financial Disclosures: None reported.
Additional Contributions: We would like to thank Steffie Woolhandler, MD, MPH, and David Himmelstein, MD, Cambridge Health Alliance and Harvard Medical School, for their thoughtful help editing the manuscript. Neither individual received any compensation for their assistance.
This article was corrected online for typographical errors on 3/9/2010.
References
2. Institute of Medicine. Initial National Priorities for Comparative Effectiveness Research. Washington, DC: The National Academies Press; 2009.
3. Stafford RS, Wagner TH, Lavori PW. New but not improved? Incorporating comparative-effectiveness information into FDA labeling. N Engl J Med. 2009;361(13):1230-1233.
4. Sox HC, Greenfield S. Comparative effectiveness research: a report from the Institute of Medicine. Ann Intern Med. 2009;151(3):203-205.
7. American College of Physicians. Improved Availability of Comparative Effectiveness Information: An Essential Feature for a High-Quality and Efficient United States Health Care System. Position paper. Philadelphia, PA: American College of Physicians; 2008.
8. Jacobson GA. CRS report for Congress: comparative clinical effectiveness and cost-effectiveness research: background, history, and overview. October 15, 2007. http://aging.senate.gov/crs/medicare6.pdf. Accessed February 11, 2010.
10. Piaggio G, Elbourne DR, Altman DG, Pocock SJ, Evans SJ; CONSORT Group. Reporting of noninferiority and equivalence randomized trials: an extension of the CONSORT statement. JAMA. 2006;295(10):1152-1160.
12. Volpp KG, Das A. Comparative effectiveness—thinking beyond medication A versus medication B. N Engl J Med. 2009;361(4):331-333.
14. Ridker PM, Torres J. Reported outcomes in major cardiovascular clinical trials funded by for-profit and not-for-profit organizations: 2000-2005. JAMA. 2006;295(19):2270-2274.
15. Chan AW, Krleza-Jerić K, Schmid I, Altman DG. Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ. 2004;171(7):735-740.
16. Ernst E, Pittler MH. Assessment of therapeutic safety in systematic reviews: literature review. BMJ. 2001;323(7312):546.
18. Pitrou I, Boutron I, Ahmad N, Ravaud P. Reporting of safety results in published reports of randomized controlled trials. Arch Intern Med. 2009;169(19):1756-1761.
19. Fontanarosa PB, Rennie D, DeAngelis CD. Postmarketing surveillance—lack of vigilance, lack of trust. JAMA. 2004;292(21):2647-2650.
21. Bhandari M, Busse JW, Jackowski D, et al. Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials. CMAJ. 2004;170(4):477-480.
22. Djulbegovic B, Lacevic M, Cantor A, et al. The uncertainty principle and industry-sponsored research. Lancet. 2000;356(9230):635-638.
23. Kjaergard LL, Als-Nielsen B. Association between competing interests and authors' conclusions: epidemiological study of randomised clinical studies published in the BMJ. BMJ. 2002;325(7358):249-252.
24. StataCorp. Stata Statistical Software: Release 11. College Station, TX: StataCorp LP; 2009.
25. Lewis TR, Reichman JH, So AD. The case for public funding and public oversight of clinical trials. The Economists' Voice. 2007;4(1):article 3. http://www.bepress.com/ev/vol4/iss1/art3. Accessed February 11, 2010.
26. Vitiello B, Heiligenstein JH, Riddle MA, Greenhill LL, Fegert JM. The interface between publicly funded and industry-funded research in pediatric psychopharmacology: opportunities for integration and collaboration. Biol Psychiatry. 2004;56(1):3-9.
27. Vandenbroucke JP, Psaty BM. Benefits and risks of drug treatments: how to combine the best evidence on benefits with the best data about adverse effects. JAMA. 2008;300(20):2417-2419.
28. Baciu A, Stratton K, Burke SP; Committee on the Assessment of the US Drug Safety System. The Future of Drug Safety: Promoting and Protecting the Health of the Public. Washington, DC: The National Academies Press; 2006.
29. Papanikolaou PN, Churchill R, Wahlbeck K, Ioannidis JP. Safety reporting in randomized trials of mental health interventions. Am J Psychiatry. 2004;161(9):1692-1697.
30. American College of Physicians. Information on cost-effectiveness: an essential product of a national comparative effectiveness program. Ann Intern Med. 2008;148(12):956-961.
32. Clement FM, Harris A, Li JJ, Yong K, Lee KM, Manns BJ. Using effectiveness and cost-effectiveness to make drug coverage decisions: a comparison of Britain, Australia, and Canada. JAMA. 2009;302(13):1437-1443.
33. Katz KA, Karlawish JH, Chiang DS, Bognet RA, Propert KJ, Margolis DJ. Prevalence and factors associated with use of placebo control groups in randomized controlled trials in psoriasis: a cross-sectional study. J Am Acad Dermatol. 2006;55(5):814-822.
36. Bekelman JE, Li Y, Gross CP. Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA. 2003;289(4):454-465.
37. Als-Nielsen B, Chen W, Gluud C, Kjaergard LL. Association of funding and conclusions in randomized drug trials: a reflection of treatment effect or adverse events? JAMA. 2003;290(7):921-928.
38. Ioannidis JP. Effectiveness of antidepressants: an evidence myth constructed from a thousand randomized trials? Philos Ethics Humanit Med. 2008;3:14.
39. Greene CJ, Morland LA, Durkalski VL, Frueh BC. Noninferiority and equivalence designs: issues and implications for mental health research. J Trauma Stress. 2008;21(5):433-439.
40. Tuma RS. Trend toward noninferiority trials may mean more difficult interpretation of trial results. J Natl Cancer Inst. 2007;99(23):1746-1748.
41. Gomberg-Maitland M, Frison L, Halperin JL. Active-control clinical trials to establish equivalence or noninferiority: methodological and statistical concepts linked to quality. Am Heart J. 2003;146(3):398-403.