Table 1. Characteristics for All Studies Registered in ClinicalTrials.gov, All Interventional Studies, and Interventional Trials From October 2004 Through September 2007 and From October 2007 Through September 2010
Table 2. Clinical Trial Attributes by Therapeutic Area for All Interventional Trials, October 2007–September 2010
Table 3. Trial Characteristics and Summary of Designs for All Interventional Trials, Registered October 2007–September 2010
Table 4. Regression Analyses of Interventional Trials Registered in ClinicalTrials.gov, October 2007–September 2010, and the Reported Use of DMC, Blinding, and Randomization
Original Contribution
May 2, 2012

Characteristics of Clinical Trials Registered in ClinicalTrials.gov, 2007-2010

Author Affiliations

Author Affiliations: Duke Translational Medicine Institute, Durham, North Carolina (Drs Califf and Kramer); National Library of Medicine, National Institutes of Health, Bethesda, Maryland (Dr Zarin); Office of Medical Policy, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, Maryland (Dr Sherman); and Duke Clinical Research Institute, Durham (Drs Kramer and Tasneem and Ms Aberle).

JAMA. 2012;307(17):1838-1847. doi:10.1001/jama.2012.3424

Context Recent reports highlight gaps between guidelines-based treatment recommendations and the evidence from clinical trials that supports those recommendations. Strengthened reporting requirements for studies registered with ClinicalTrials.gov enable a comprehensive evaluation of the national trials portfolio.

Objective To examine fundamental characteristics of interventional clinical trials registered in the ClinicalTrials.gov database.

Methods A data set comprising 96 346 clinical studies was downloaded from ClinicalTrials.gov on September 27, 2010, and entered into a relational database to analyze aggregate data. Interventional trials were identified and analyses were focused on 3 clinical specialties—cardiovascular, mental health, and oncology—that together encompass the largest number of disability-adjusted life-years lost in the United States.

Main Outcome Measures Characteristics of registered clinical trials as reported data elements in the trial registry; how those characteristics have changed over time; differences in characteristics as a function of clinical specialty; and factors associated with use of randomization, blinding, and data monitoring committees (DMCs).

Results The number of registered interventional clinical trials increased from 28 881 (October 2004–September 2007) to 40 970 (October 2007–September 2010), and the number of missing data elements has generally declined. Most interventional trials registered between 2007 and 2010 were small, with 62% enrolling 100 or fewer participants. Many clinical trials were single-center (66%; 24 788/37 520) and funded by organizations other than industry or the National Institutes of Health (NIH) (47%; 17 592/37 520). Heterogeneity in the reported methods by clinical specialty; sponsor type; and the reported use of DMCs, randomization, and blinding was evident. For example, reported use of DMCs was less common in industry-sponsored vs NIH-sponsored trials (adjusted odds ratio [OR], 0.11; 95% CI, 0.09-0.14), earlier-phase vs phase 3 trials (adjusted OR, 0.83; 95% CI, 0.76-0.91), and mental health trials vs those in the other 2 specialties. In similar comparisons, randomization and blinding were less frequently reported in earlier-phase, oncology, and device trials.

Conclusion Clinical trials registered in ClinicalTrials.gov are dominated by small trials and contain significant heterogeneity in methodological approaches, including reported use of randomization, blinding, and DMCs.

Clinical trials are the central means by which preventive, diagnostic, and therapeutic strategies are evaluated,1 but the US clinical trials enterprise has been marked by debate regarding funding priorities for clinical research, the design and interpretation of studies, and protections for research participants.2-4 Until recently, however, we have lacked tools for comprehensively assessing trials across the broader US clinical trial enterprise.

In 1997, Congress mandated the creation of the ClinicalTrials.gov registry to assist people with serious illnesses in gaining access to trials.5 In September 2004, the International Committee of Medical Journal Editors (ICMJE) announced a policy, which took effect in 2005, of requiring registration of clinical trials as a prerequisite for publication.6,7 The Food and Drug Administration Amendments Act (FDAAA)8 expanded the mandate of ClinicalTrials.gov to include most non–phase 1 interventional drug and device trials, with interventional trials defined as “studies in human beings in which individuals are assigned by an investigator based on a protocol to receive specific interventions”9 (eTable 1). The law obliges sponsors or their designees to register trials and record key data elements (effective September 27, 2007), report basic results (September 27, 2008), and report adverse events (September 27, 2009).10

Recent work11,12 highlights the inadequate evidence base of current practice, in which less than 15% of major guideline recommendations are based on high-quality evidence, often defined as evidence that emanates from trials with appropriate designs; sufficiently large sample sizes; and appropriate, validated outcome measures,13,14 as well as oversight by institutional review boards and data monitoring committees (DMCs) to protect participants and ensure the trial's integrity.14

In this article, we examine fundamental characteristics of interventional clinical trials in 3 major therapeutic areas contained in the registry (cardiovascular, mental health, and oncology), focusing on study characteristics (data elements reported in trial registration) that are desirable for generating reliable evidence from clinical trials, including factors associated with use of DMCs, randomization, and blinding.


The methods used by ClinicalTrials.gov to register clinical studies have been described previously.15-17 Briefly, sponsors and investigators from around the world enter data through a web-based data entry system. The country address of each facility (ie, a site that can potentially enroll participants) was used to group sites into regions according to rubrics used by ClinicalTrials.gov (individual countries included in each region are available). The sample we examined includes studies registered to comply with legal obligations, as well as those registered voluntarily to meet ICMJE requirements or for other reasons. Similarly, data for registered studies include both mandatory and optional elements. Over time, the types, definitions, and optional vs mandatory status of data elements have changed. Mandatory and optional data elements for registration as of August 2011 are shown in eAppendix 1.

Data Set

We downloaded an XML data set comprising all 96 346 clinical studies registered with ClinicalTrials.gov as of September 27, 2010—1 decade after the registry's launch and 3 years after enactment of the FDAAA. We loaded the data set into a relational database (Oracle RDBMS version 11.1, Oracle Corporation) to facilitate aggregate analysis. This resource, the Database for Aggregate Analysis of ClinicalTrials.gov (AACT), along with data definitions and comprehensive data dictionaries, is available at the Clinical Trials Transformation Initiative website.19

Our analysis was restricted to interventional studies registered with ClinicalTrials.gov between October 2007 and September 2010. To identify interventional studies, we used the “study type” field from the registry, which included the following choices: interventional, observational, expanded access, and not available (NA) (eAppendix 1). Interventional trials were defined as “studies in human beings in which individuals are assigned by an investigator based on a protocol to receive specific interventions.” In this study, the terms clinical trial, interventional trial, and interventional study are synonymous. Interventional studies were regrouped within the downloaded, derivative database according to the 3 clinical specialties—cardiovascular, oncology, and mental health—that together encompass the largest number of disability-adjusted life-years lost in the United States.20 For this regrouping, we used submitted disease condition terms and Medical Subject Heading (MeSH) terms generated by a National Library of Medicine (NLM) algorithm to develop a methodology to annotate, validate, adjudicate, and implement disease condition terms (MeSH and non-MeSH) to create specialty data sets.

A subset of the 2010 MeSH thesaurus from the NLM21 and a list of non-MeSH disease condition terms provided by data submitters that appeared in 5 or more interventional studies in the analysis data set were reviewed and annotated by clinical specialists at Duke University Medical Center (eAppendix 2). Terms were annotated according to their relevance to a given specialty (Y = relevant, N = not relevant). Specialty data sets were created and the results of algorithmic classifications were validated by comparison with classifications based on manual review. Clinical trials were classed according to date registered and by interventional status. Details regarding the creation of these specialty data sets are provided in an article describing the study methodology.22
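In outline, a study then falls into a specialty data set if any of its condition terms was annotated as relevant to that specialty. A minimal sketch of that membership rule (term strings and annotation values here are illustrative, not the registry's literal data):

```python
def in_specialty(condition_terms, annotations):
    """Return True if any submitted or MeSH condition term on a study
    was annotated as relevant ('Y') to the specialty in question.

    condition_terms: condition terms attached to one registered study
    annotations: dict mapping term -> 'Y' (relevant) or 'N' (not relevant),
                 as produced by the clinical-specialist review
    """
    return any(annotations.get(term) == "Y" for term in condition_terms)
```

Terms absent from the annotation dictionary (eg, rare non-MeSH terms appearing in fewer than 5 studies) simply do not contribute to membership.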

Within these specialty data sets, a few data elements are missing because of limitations in the data set or logistical problems in obtaining analyzable information. Specifically, the data element “human subject review” is not present in the public download, and data regarding primary outcomes and oversight authority are not readily analyzable because of the presence of free-text values.

Analytical Methods

Clinical trial characteristics were first assessed overall, by interventional trials, and by 2 temporal subsets: October 2004 through September 2007 and October 2007 through September 2010. The percentage of trials registered before and after enrollment of the first participant was also determined by comparing the date of registration to the date that the first participant was enrolled. Other assessments included clinical trial characteristics, enrollment characteristics, funding source, and number of study sites for all clinical trials vs cardiovascular, mental health, and oncology trials for the latter time period (October 2007–September 2010). Funding sources included industry, NIH, other US federal (excluding NIH), and other. Frequencies and percentages are provided for categorical characteristics; medians and interquartile ranges (IQRs) are provided for continuous characteristics.

Logistic regression analysis was performed to calculate adjusted odds ratios (ORs) with Wald 95% confidence intervals for factors associated with trials that report use of DMCs, randomization, and blinding. A full model containing 9 prespecified characteristics was developed. The first of these was lead sponsor, which the NLM defines as the primary organization that oversees study implementation and is responsible for conducting data analysis.19 Collaborators are defined as other organizations (if any) that provide additional support, including funding, design, implementation, data analysis, and reporting. The sponsor (or designee) is responsible for confirming all collaborators before listing them. ClinicalTrials.gov stores funding organization information in 2 data elements: lead sponsor and collaborator. The NLM classifies submitted agency names in these data elements as industry, NIH, US federal (excluding NIH), or other. We derived probable funding source from the lead sponsor and collaborator fields using the following algorithm: if the lead sponsor was from industry, or if the NIH was neither a lead sponsor nor a collaborator and at least 1 collaborator was from industry, then the study was categorized as industry funded. If the lead sponsor was not from industry and the NIH was either a lead sponsor or a collaborator, then the study was categorized as NIH funded. Otherwise, if the lead sponsor and collaborator fields were nonmissing, the study was considered to be funded by other sources.
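The funding-source derivation just described can be sketched as a small classification function. This is a minimal sketch assuming simplified agency-class labels ("industry", "nih", and so on) rather than the literal agency-class values stored in the registry:

```python
def derive_funding_source(lead_sponsor, collaborators):
    """Classify a study's probable funding source from the agency classes
    of its lead sponsor and collaborators, per the rules described above.

    lead_sponsor: "industry", "nih", "us_fed", "other", or None if missing
    collaborators: list of agency classes for listed collaborators
    """
    nih_involved = lead_sponsor == "nih" or "nih" in collaborators
    # Rule 1: industry lead sponsor, or an industry collaborator with no
    # NIH involvement anywhere -> industry funded
    if lead_sponsor == "industry" or (not nih_involved and "industry" in collaborators):
        return "industry"
    # Rule 2: non-industry lead sponsor with NIH as lead or collaborator
    if nih_involved:
        return "nih"
    # Rule 3: fields present but neither industry nor NIH -> other
    if lead_sponsor is not None:
        return "other"
    return None  # missing data; excluded from funding-source analyses
```

Note that rule 1 gives NIH involvement priority over an industry collaborator: a study with an "other" lead sponsor and both NIH and industry collaborators is classed as NIH funded.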

Also included in the model were phase (0, 1, 1/2, 2, 2/3, 3, 4, NA); number of participants; trial specialty—cardiovascular, oncology, or mental health (yes/no); trial start year; intervention type (procedure/device, drug or biological, behavioral, dietary supplement, other); and primary purpose (treatment, prevention, diagnostic, other). For the purposes of this modeling, studies reporting multiple intervention types were categorized in the following hierarchy: 1, procedure/device; 2, drug/biological; 3, behavioral; 4, dietary supplement; 5, other. Studies missing a response to any of the data elements used in the model were excluded. The model predicting trials with DMC was also run in 2 additional ways: (1) assuming that those trials missing a response to the question regarding DMC did in fact have a DMC, and (2) assuming that those trials missing a response to the question regarding DMC did not in fact have a DMC.
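The precedence rule for studies reporting multiple intervention types amounts to taking the highest-ranked type in the stated hierarchy. A sketch, again using illustrative labels:

```python
# Precedence order for assigning a single intervention type to studies
# that report several types (rank 1 wins over rank 2, and so on).
PRECEDENCE = [
    "procedure/device",
    "drug/biological",
    "behavioral",
    "dietary supplement",
    "other",
]

def primary_intervention(reported_types):
    """Return the highest-precedence intervention type among those reported,
    or None if none of the reported types is recognized."""
    for intervention_type in PRECEDENCE:
        if intervention_type in reported_types:
            return intervention_type
    return None
```

So a trial reporting both a behavioral and a drug intervention would be modeled as a drug/biological trial.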

When possible for all analyses, values of missing methodological trial characteristics were inferred based on other available data. For example, for studies reporting an interventional model of single group and number of groups as 1, the value of allocation was designated as nonrandomized and the value of blinding was designated as open.
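This inference rule can be expressed directly; the field names and values below are illustrative rather than the registry's literal XML element names:

```python
def infer_design(record):
    """Fill in missing allocation and blinding values where they can be
    deduced: a single-group study with only 1 group cannot be randomized,
    and its assignment cannot be blinded, so allocation is inferred as
    nonrandomized and blinding as open. Existing values are left untouched.
    """
    if (record.get("interventional_model") == "single group"
            and record.get("number_of_groups") == 1):
        record.setdefault("allocation", "nonrandomized")
        record.setdefault("blinding", "open")
    return record
```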

SAS version 9 (SAS Institute) was used for all statistical analyses.


Basic characteristics of all studies registered with ClinicalTrials.gov as of September 27, 2010 (N = 96 346), all interventional trials registered during the same period (n = 79 413), and 2 recent subsets of interventional trials (October 2004–September 2007 and October 2007–September 2010) are shown in Table 1. The number of trials submitted for registration increased from 28 881 to 40 970 between the 2 periods. A decline in the numbers of missing data elements occurred for some characteristics. The rate of registered trials not reporting use of DMCs decreased from 57.9% to 18% between the 2 time periods; not reporting either enrollment number or type (anticipated or actual) decreased from 33.8% to 1.8%; not reporting randomization decreased from 5.6% to 4.2%; and not reporting blinding decreased from 3.5% to 2.7%. The rate of missing data for primary purpose increased from 4.6% to 6.8% during these periods. The proportion of trials reporting an NIH lead sponsor decreased from 6.3% to 2.7% during the 2 periods, and the proportion of trials with at least 1 North American research site decreased from 61.9% to 57.5%. Other characteristics did not change substantially.

The proportion of trials registered before beginning participant enrollment increased over the 2 time periods: from 33% (9041/27 667) in October 2004–September 2007 to 48% (19 347/40 333) in October 2007–September 2010.

The majority of clinical trials were small in terms of numbers of participants. Overall, 96% of these trials had an anticipated enrollment of 1000 or fewer participants and 62% had 100 or fewer participants (eFigure). The median number of participants per trial was 58 (IQR, 27-161) for completed trials and 70 (IQR, 35-190) for trials that have been registered but not completed.

Table 2 shows selected characteristics of all interventional trials registered from October 2007 through September 2010 (n = 40 970), as well as characteristics for oncology, cardiovascular, and mental health trials compared with all other trials. Of these 3 categories, oncology trials were most numerous (n = 8992, 21.9%) and comprised the largest proportion of trials listed as currently recruiting: 31.5% vs 9.3% and 10% for cardiovascular and mental health trials, respectively. Oncology trials also constituted the largest proportion of trials that were active but not yet recruiting (25.8% vs 7.4% for cardiovascular and 7.5% for mental health) and that were oriented toward treatment (25.7% vs 8% for cardiovascular and 9.6% for mental health). Among trials oriented toward prevention, cardiovascular trials comprised the largest group: 10.4% vs 8.1% for oncology and 5.9% for mental health. Cardiovascular trials also accounted for the largest proportion of trials assessing medical devices: 20.2% vs 7.0% for oncology and 3.8% for mental health. As expected, among trials incorporating behavioral interventions, mental health trials were most common: 33.4% vs 8.1% for oncology and 7.2% for cardiovascular.

Enrollment and design characteristics for all interventional trials registered from October 2007 through September 2010 are displayed in Table 3. There was heterogeneity in median anticipated trial size according to specialty. Cardiovascular trials (median anticipated enrollment, 100; IQR, 42-280) tended to be nearly twice as large as oncology trials (median, 54; IQR, 30-120), with mental health trials (median, 85; IQR, 40-200) residing between these 2. Cardiovascular and mental health trials were more oriented toward later-phase research (ie, phases 3 and 4) while oncology trials displayed a higher relative proportion of earlier-phase trials (ie, phases 0 through 2). Trials restricted to women were almost twice as common as trials restricted to men (9.1% vs 5.4%), a difference driven largely by oncology trials (13.8% exclude men, compared with 2.0% [cardiovascular] and 5.8% [mental health]).

There were also differences in age distribution among therapeutic areas. Mental health trials were most likely to permit inclusion of children (17.9% vs 11.3% for oncology and 10.5% for cardiovascular) but were also most likely to exclude elderly participants: 56% of mental health trials excluded participants older than 65 years compared with 8.1% for oncology and 13.3% for cardiovascular.

Geographical differences were also apparent. Cardiovascular trials showed the smallest proportion of studies with at least 1 North American research site (47.9%, vs 65.1% for oncology and 69.1% for mental health) and the most substantial proportion of trials with at least 1 European site (39.9% vs 27.6% and 20.9%, respectively).

Differences in trial design were also evident among therapeutic areas (Table 3). Oncology trials were more likely to involve a single group of participants with no randomization of treatment assignment (64.7% vs 26.2% for cardiovascular and 20.8% for mental health), and the majority of oncology trials (87.6%) were not blinded. Mental health trials, on the other hand, were more likely to be blinded (60.0%, vs 12.4% for oncology and 49.0% for cardiovascular), to use parallel-group design (65.9% vs 32.5% for oncology and 63.2% for cardiovascular), and to use randomization (80.1%, vs 36.3% for oncology and 73.7% for cardiovascular).

Data on funding source and number of sites were available for 37 520 of 40 970 clinical trials registered during the 2007-2010 period (eTable 2). The largest proportion of these trials were not funded by industry or the NIH (47%, n = 17 592) with 16 674 (44%) funded by industry, 3254 (9%) funded by the NIH, and 757 (2.0%) funded by other US federal agencies. The majority of trials were single site (24 788, 66%); 12 732 (34%) were multisite trials. The largest proportion of trials (39%, 14 637/37 520) comprised single-site trials that were not funded by the NIH or by industry (see eTable 2; note: this excluded 3450 trials [8%] with missing data on facility location). These single-site trials not funded by the NIH or industry were typically small: approximately 70% had enrolled or planned to enroll fewer than 100 participants. They were characterized primarily by North American (46.1%) and European (30.1%) representation. Industry-funded multicenter trials included Asian and Pacific sites in 27% of trials and European sites in 41.2% of trials, with 33.4% of trials not involving any North American sites.

Regression analyses comparing trial characteristics as they relate to use of DMCs, blinding, and randomization are displayed in Table 4. Compared with trials in which industry was the lead sponsor, trials with other types of lead sponsors were more likely to report use of DMCs, with DMCs most common among NIH-sponsored trials (adjusted OR, 9.09; 95% CI, 7.28-11.34). Equivalently, reported use of DMCs was less common in industry-sponsored vs NIH-sponsored trials (adjusted OR, 0.11; 95% CI, 0.09-0.14). Relative to phase 3 trials, earlier- and later-phase trials were less likely to report use of DMCs (adjusted OR, 0.83; 95% CI, 0.76-0.91 [earlier phase]; adjusted OR, 0.52; 95% CI, 0.47-0.58 [later phase]). Compared with cardiovascular and oncology trials, mental health trials were less likely to report use of DMCs. When compared with trials evaluating drugs or biologics, trials of behavioral interventions were less likely to report use of DMCs.

There were small differences in reporting of blinding or randomization by different lead sponsor organizations. For example, trials in which a US federal agency (excluding the NIH) or another sponsor was the lead sponsor were less likely to report use of blinding (adjusted OR, 0.65; 95% CI, 0.51-0.83; and adjusted OR, 0.90; 95% CI, 0.84-0.96, respectively). Relative to phase 3 trials, earlier- and later-phase trials were also less likely to report use of blinding (adjusted OR, 0.66; 95% CI, 0.60-0.72 [earlier phase]; adjusted OR, 0.50; 95% CI, 0.45-0.55 [later phase]) and randomization (adjusted OR, 0.28; 95% CI, 0.25-0.31 [earlier phase]; adjusted OR, 0.37; 95% CI, 0.33-0.42 [later phase]). Oncology trials were less likely to use randomization (adjusted OR, 0.20; 95% CI, 0.19-0.22) and blinding (adjusted OR, 0.10; 95% CI, 0.09-0.11), while mental health trials were no more likely to use randomization (adjusted OR, 1.03; 95% CI, 0.92-1.15) but were more likely to use blinding (adjusted OR, 1.43; 95% CI, 1.30-1.57). When compared with trials evaluating drugs or biologics, trials of behavioral interventions were less likely to report use of blinding (adjusted OR, 0.63; 95% CI, 0.56-0.71) but more likely to use randomization (adjusted OR, 3.22; 95% CI, 2.69-3.84). Trials of dietary supplements were more likely to use blinding (adjusted OR, 2.95; 95% CI, 2.45-3.54) and randomization (adjusted OR, 2.65; 95% CI, 2.10-3.35), and procedure and device trials were less likely to use blinding (adjusted OR, 0.51; 95% CI, 0.47-0.56) and randomization (adjusted OR, 0.76; 95% CI, 0.69-0.83).

More recent trials (reference: per 1-year increment) were more likely to report use of DMCs (adjusted OR, 1.06; 95% CI, 1.05-1.08) and blinding (adjusted OR, 1.02; 95% CI, 1.01-1.04) but no more likely to use randomization (adjusted OR, 1.01; 95% CI, 0.99-1.02). Larger trials were more likely to report use of DMCs (adjusted OR, 1.03; 95% CI, 1.01-1.05) and randomization (adjusted OR, 1.02; 95% CI, 1.00-1.05). Diagnostic trials were less likely to report use of all 3 methods (DMCs: adjusted OR, 0.64; 95% CI, 0.54-0.75; blinding: adjusted OR, 0.57; 95% CI, 0.48-0.68; randomization: adjusted OR, 0.23; 95% CI, 0.19-0.27) while prevention trials were more likely to use all 3 compared with treatment trials (DMCs: adjusted OR, 1.17; 95% CI, 1.06-1.30; blinding: adjusted OR, 1.15; 95% CI, 1.05-1.27; randomization: adjusted OR, 1.45; 95% CI, 1.28-1.64). Finally, an analysis of blinding using only randomized trials produced results similar to the blinding analysis using all interventional trials, and oncology trials in particular were less likely to report use of blinding in the context of a randomized design (χ2 = 933; P < .001).


Clinical studies registered in the ClinicalTrials.gov database are dominated by small, single-center trials, many of which are not funded by the NIH or industry. Many registered trials contain significant heterogeneity in methodological approaches, including reported use of randomization, blinding, and DMCs. Although ClinicalTrials.gov has a number of limitations, it is the largest aggregate resource for informing policy analysis about the US clinical trials enterprise. We anticipate that the “sunshine” on the national clinical trials portfolio brought about by ClinicalTrials.gov, coupled with the greater ease of obtaining an analysis data set from the AACT database,19 will engender much-needed debate about clinical trial methodologies and funding allocation.

Many of the differences noted in the present study have been identified before and likely represent variation in appropriate approaches for particular diseases. Reviews of samples from the literature in 198023 and 200024 raised similar questions, for which this report provides a contemporary and more comprehensive sample. Despite concerns previously articulated by Meinert et al23 and Chan and Altman24—concerns that included a relatively high prevalence of clinical trials with inadequate sample sizes and insufficiently described methodologies—disparities still remain across specialties. This in turn raises questions about why such heterogeneity persists, whether the portfolio documented by this analysis suffices to address gaps in evidence, and the reasons underlying differences in trial methodology. It is particularly important to identify cases in which such methodological differences lack adequate scientific justification, as they may present an opportunity for improving the public health through adjustments to research investment strategies and methods.

Implications for Policy and Strategy

The fact that 50% of interventional studies registered from October 2007 to September 2010 by design include fewer than 70 participants may have important policy implications. Small trials may be appropriate in many cases (eg, earlier-phase drug evaluations, or investigations of biological or behavioral mechanisms, rather than clinical outcomes). Particularly in oncology, there is a growing sense that small trials based on genetics or biomarkers can yield definitive results.25 However, small trials are unlikely to be informative in many other settings, such as establishing the effectiveness of treatments with modest effects and comparing effective treatments to enable better decisions in practice.26-28 Preliminary observations suggest that many small clinical trials were designed to enroll more participants, raising questions about their ultimate power (D. A. Zarin, MD, written communication, March 28, 2012), but an accurate depiction of these issues requires a more in-depth analysis. These findings raise important issues that should be addressed by detailed, specialty-oriented assessments of the utility of the large number of small trials.

A comprehensive collection of all clinical trials on a global basis would enable the most effective examination of evidence to support medical decisions. The effect of the globalization of clinical research has been debated,29-32 and emerging evidence of differential regional involvement as a function of therapeutic area also raises questions relevant to policy and strategy. Although the World Health Organization (WHO) provides a portal for many trial registries from around the world, unacknowledged duplicate entries make it difficult to determine a unique list of clinical trials; in addition, the overall data set is not available for electronic download, rendering the data unavailable for aggregate analysis.33

Attention to standards for nondrug interventions (eg, biologics, devices, and procedures) as well as study design would also enhance the ability to describe and understand the clinical trials enterprise.34 Indeed, as Devereaux and colleagues35 point out, concepts as fundamental as blinding are shrouded in terminological confusion and ambiguity. Furthermore, lack of clarity surrounding the naming of devices and biologics makes examination of specific medical technologies difficult.

Although industry is the lead sponsor in only about 36% of interventional trials in this study, these trials accounted for 59% of all trial participants. Further analysis of trials in each specialty may help elucidate this complex mix of funding, trial size, and location so that policies might be enacted to improve the responsiveness of trials to the needs of public health and the overall research community.

Methodological differences across therapeutic areas are also of interest. The greater focus on earlier-phase trials and biomarker-based personalized medicine25 may explain some of the differences in approach evident with oncology trials, but substantial differences in the use of randomization and blinding across specialties persist after adjustment for phase, raising fundamental questions about the ability to draw reliable inferences from clinical research conducted in that arena.

The reporting of use of a DMC is an optional data element within the registry. The appropriate criteria for determining when a DMC is useful or required remain controversial. Yet the heterogeneity observed by trial phase, disease category, and lead sponsor category in this study (eg, industry vs government sponsorship) may represent an opportunity for mutual learning and compromise among disparate views. The trend toward increased reporting of use of DMCs over time in this study is notable, but clear policies would be useful to those researchers designing trials. For example, many different arrangements can be made for monitoring safety in clinical trials, and the current data only reflect the presence of a typical, well-defined DMC.


Several limitations of our study should be noted. First, ClinicalTrials.gov does not include all clinical trials. Within the United States, legal requirements for registration do not extend to phase 1 trials, trials not involving a drug or device, and trials not under US jurisdiction. Also, although many trialists from other countries use ClinicalTrials.gov to satisfy ICMJE registration requirements,7 other registries around the world may be used.10 However, ClinicalTrials.gov still accounts for more than 80% of all clinical studies in the WHO portal, based on comparing the number of clinical studies appearing in the registry with the number of unique studies appearing in the WHO portal.

Second, there have been changes over time in the data collected, the definitions used, and the rigor with which missing data are pursued. As described in the “Methods” section, some data elements were either missing or unavailable because of practical or logistical limitations. Some of these issues can be addressed by focused analyses in which ancillary data sets are created or primary protocols and studies are reviewed. In addition, the potential for serious sanctions for incomplete data under the FDAAA may have improved data collection for those fields in recent years. As noted earlier, we used the study type field from the registry to identify interventional studies; however, we did not perform additional manual screening to identify and exclude possibly misclassified observational studies.

Third, the need for a standard ontology to describe clinical research remains a pressing concern. Current definitions were developed to help individuals find particular trials or were legally mandated without necessarily involving experts or allowing time for testing. Consequently, some data remain ambiguous, complicating efforts to combine and analyze results in a given therapeutic area or across areas. For example, the terms interventional trial and clinical trial are critical for distinguishing purely observational studies from those that assign participants to an interventional therapy. Further refinement of this definition9 could be helpful to those interested in differentiating high-risk invasive interventions from low-risk interventions or distinguishing specific types of behavioral, drug, or device interventions.


The clinical trials enterprise, as revealed by the contents of ClinicalTrials.gov, is dominated by small clinical trials and contains significant heterogeneity in methodological approaches, including the use of randomization, blinding, and DMCs. Our analysis raises questions about the best methods for generating evidence, as well as the capacity of the clinical trials enterprise to supply the volume of high-quality evidence needed to ensure confidence in guideline recommendations. Given the deficit in evidence to support key decisions in clinical practice guidelines11,12 as well as concerns about insufficient numbers of volunteers for trials,36 the desire to provide high-quality evidence for medical decisions must include consideration of a comprehensive redesign of the clinical trials enterprise.

Back to top
Article Information

Corresponding Author: Robert M. Califf, MD, Duke Translational Medicine Institute, 200 Trent Dr, 1117 Davison Bldg, Durham, NC 27710.

Author Contributions: Dr Califf had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Califf, Zarin, Sherman, Tasneem.

Acquisition of data: Zarin, Tasneem.

Analysis and interpretation of data: Califf, Zarin, Kramer, Aberle, Tasneem.

Drafting of the manuscript: Califf, Sherman.

Critical revision of the manuscript for important intellectual content: Califf, Zarin, Kramer, Aberle, Tasneem.

Statistical analysis: Califf, Aberle.

Obtained funding: Califf, Kramer.

Administrative, technical, or material support: Califf, Zarin, Tasneem.

Study supervision: Califf, Zarin, Sherman.

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Califf reports receiving research grants that partially support his salary from Amylin, Johnson & Johnson (Scios), Merck, Novartis Pharma, Schering Plough, Bristol-Myers Squibb Foundation, Aterovax, Bayer, Roche, and Lilly; all grants are paid to Duke University. Dr Califf also consults for Johnson & Johnson (Scios), Kowa Research Institute, Nile, Parkview, Orexigen Therapeutics, Pozen, Servier International, WebMD, Bristol-Myers Squibb Foundation, AstraZeneca, Bayer-OrthoMcNeil, BMS, Boehringer Ingelheim, Daiichi Sankyo, GlaxoSmithKline, Li Ka Shing Knowledge Institute, Medtronic, Merck, Novartis, sanofi-aventis, XOMA, and University of Florida; all income from these consultancies is donated to nonprofit organizations, with the majority going to the clinical research fellowship fund of the Duke Clinical Research Institute. Dr Califf holds equity in Nitrox LLC. Dr Kramer is the executive director of the Clinical Trials Transformation Initiative (CTTI), a public-private partnership. A portion of Dr Kramer's salary is supported by pooled funds from CTTI members. Dr Kramer reports receiving a research grant from Pfizer that supports a small percentage of her salary; this grant is paid to Duke University. Dr Kramer also served on an advisory board for the “Pharmacovigilance Center of Excellence” at GlaxoSmithKline, for which she received an honorarium. Financial disclosure information for Drs Califf and Kramer is also publicly available online. No other disclosures were reported.

Funding/Support: Financial support for this project was provided by grant U19FD003800 from the US Food and Drug Administration awarded to Duke University for the Clinical Trials Transformation Initiative.

Role of the Sponsors: The US Food and Drug Administration participated in the design and conduct of the study via one of the coauthors (R.E.S.); in the collection, management, analysis, and interpretation of the data; and in the preparation, review, and approval of the manuscript.

Additional Contributions: We gratefully acknowledge the contributions of CTTI Project Leader Jean Bolte, RN (Duke Clinical Research Institute), and National Library of Medicine staff members Nicholas Ide, MS; Rebecca Williams, PhD; and Tony Tse, PhD, to this project. We also thank Jonathan McCall, BA (Duke Clinical Research Institute), for editorial assistance with the manuscript. None received compensation for their contributions besides their salaries.

1. Fair tests of treatments in health care. The James Lind Library. Accessed November 28, 2011.
2. DeMets DL, Califf RM. A historical perspective on clinical trials innovation and leadership: where have the academics gone? JAMA. 2011;305(7):713-714.
3. Sung NS, Crowley WF Jr, Genel M, et al. Central challenges facing the national clinical research enterprise. JAMA. 2003;289(10):1278-1287.
4. Menikoff J, Richards EP. What the Doctor Didn't Say: The Hidden Truth About Medical Research. New York, NY: Oxford University Press; 2006.
5. Food and Drug Administration Modernization Act of 1997 (FDAMA): Public Law 105-115. US Food and Drug Administration. Accessed January 3, 2012.
6. DeAngelis CD, Drazen JM, Frizelle FA, et al; International Committee of Medical Journal Editors. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. JAMA. 2004;292(11):1363-1364.
7. Uniform requirements for manuscripts submitted to biomedical journals: obligation to register clinical trials. International Committee of Medical Journal Editors. Accessed January 3, 2012.
9. ClinicalTrials.gov protocol data element definitions (draft): August 2011. Accessed November 4, 2011.
10. Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials.gov results database: update and key issues. N Engl J Med. 2011;364(9):852-860.
11. Lee DH, Vielemeyer O. Analysis of overall level of evidence behind Infectious Diseases Society of America practice guidelines. Arch Intern Med. 2011;171(1):18-22.
12. Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC Jr. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA. 2009;301(8):831-841.
13. Guyatt GH, Oxman AD, Vist GE, et al; GRADE Working Group. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924-926.
14. Moher D, Schulz KF, Altman D; CONSORT Group (Consolidated Standards of Reporting Trials). The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA. 2001;285(15):1987-1991.
15. Gillen JE, Tse T, Ide NC, McCray AT. Design, implementation and management of a web-based data entry system for ClinicalTrials.gov. Stud Health Technol Inform. 2004;107(Pt 2):1466-1470.
16. Zarin DA, Tse T, Ide NC. Trial registration at ClinicalTrials.gov between May and October 2005. N Engl J Med. 2005;353(26):2779-2787.
17. Zarin DA, Ide NC, Tse T, Harlan WR, West JC, Lindberg DA. Issues in the registration of clinical trials. JAMA. 2007;297(19):2112-2120.
18. Studies by topic: select a location. ClinicalTrials.gov. Accessed January 3, 2012.
19. AACT database (Aggregate Analysis of ClinicalTrials.gov). Clinical Trials Transformation Initiative. Accessed March 20, 2012.
20. McKenna MT, Michaud CM, Murray CJ, Marks JS. Assessing the burden of disease in the United States using disability-adjusted life years. Am J Prev Med. 2005;28(5):415-423.
21. Medical Subject Headings (MeSH) thesaurus. Accessed February 20, 2012.
22. Tasneem A, Aberle L, Ananth H, et al. The database for Aggregate Analysis of ClinicalTrials.gov (AACT) and subsequent regrouping by clinical specialty. PLoS One. 2012;7(3):e33677.
23. Meinert CL, Tonascia S, Higgins K. Content of reports on clinical trials: a critical review. Control Clin Trials. 1984;5(4):328-347.
24. Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet. 2005;365(9465):1159-1162.
25. Kris MG, Meropol NJ, Winer EP, eds. Accelerating progress against cancer: ASCO's blueprint for transforming clinical and translational cancer research. American Society of Clinical Oncology. Accessed February 5, 2012.
26. Yusuf S, Collins R, Peto R. Why do we need some large, simple randomized trials? Stat Med. 1984;3(4):409-422.
27. Peto R, Collins R, Gray R. Large-scale randomized evidence: large, simple trials and overviews of trials. J Clin Epidemiol. 1995;48(1):23-40.
28. Califf RM, DeMets DL. Principles from clinical trials relevant to clinical practice: part I. Circulation. 2002;106(8):1015-1021.
29. Glickman SW, McHutchison JG, Peterson ED, et al. Ethical and scientific implications of the globalization of clinical research. N Engl J Med. 2009;360(8):816-823.
30. Pasquali SK, Burstein DS, Benjamin DK Jr, Smith PB, Li JS. Globalization of pediatric research: analysis of clinical trials completed for pediatric exclusivity. Pediatrics. 2010;126(3):e687-e692.
31. Kim ES, Carrigan TP, Menon V. International participation in cardiovascular randomized controlled trials sponsored by the National Heart, Lung, and Blood Institute. J Am Coll Cardiol. 2011;58(7):671-676.
32. Califf RM, Harrington RA. American industry and the US Cardiovascular Clinical Research Enterprise: an appropriate analogy? J Am Coll Cardiol. 2011;58(7):677-680.
33. International Clinical Trials Registry Platform Search Portal. World Health Organization. Accessed December 8, 2011.
34. DeAngelis CD, Drazen JM, Frizelle FA, et al; International Committee of Medical Journal Editors. Is this clinical trial fully registered? a statement from the International Committee of Medical Journal Editors. JAMA. 2005;293(23):2927-2929.
35. Devereaux PJ, Manns BJ, Ghali WA, et al. Physician interpretations and textbook definitions of blinding terminology in randomized controlled trials. JAMA. 2001;285(15):2000-2003.
36. English RA, Leibowitz Y, Giffin RB. Chapter 2: the state of clinical research in the United States: an overview. In: Transforming Clinical Research in the United States: Challenges and Opportunities: Workshop Summary. National Academies Press. Accessed February 15, 2012.