Invited Commentary
Ethics
August 25, 2022

From Hype to High-Quality Research

Author Affiliations
  • Office of Extramural Research, National Institutes of Health, Bethesda, Maryland
JAMA Netw Open. 2022;5(8):e2228683. doi:10.1001/jamanetworkopen.2022.28683

Millar and colleagues1 describe an analysis of the use of promotional language (“hype”) in abstracts of National Institutes of Health (NIH)–funded research projects. They extracted more than 900 000 abstracts from the NIH RePORTER (Research Portfolio Online Reporting Tools: Expenditures and Results) archive and measured the frequency of certain terms from 1985 to 2020. They found that the use of most promotional adjectives—such as novel, critical, and innovative—increased gradually and continuously, while the use of others—such as scalable and transformative—was essentially nonexistent in 1985 but became much more common beginning in the late 2000s. The authors acknowledge that the existence of hype in grant applications is not surprising, given that “the genre is inherently promissory,” and go on to write that applicants “increasingly describe their work in subjective terms and rely on appeals to emotion.”
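At its core, such an analysis is a term-frequency count over time. The minimal Python sketch below illustrates the idea only; it is not the authors’ actual pipeline. The term list is a subset of the adjectives named above, the per-1000-words normalization is an assumption rather than the published measure, and the two sample abstracts are fabricated stand-ins for RePORTER records.

    import re
    from collections import Counter

    # Illustrative subset of the promotional adjectives named above; the
    # published analysis used a larger curated term list.
    HYPE_TERMS = {"novel", "critical", "innovative", "scalable", "transformative"}

    def hype_rate_by_year(abstracts):
        """abstracts: iterable of (year, text) pairs.
        Returns {year: promotional words per 1000 words}; this
        normalization is an assumption for illustration."""
        total = Counter()  # words seen per year
        hits = Counter()   # promotional words seen per year
        for year, text in abstracts:
            tokens = re.findall(r"[a-z]+", text.lower())
            total[year] += len(tokens)
            hits[year] += sum(token in HYPE_TERMS for token in tokens)
        return {year: 1000 * hits[year] / total[year]
                for year in total if total[year] > 0}

    # Fabricated stand-ins for RePORTER records, for demonstration only
    sample = [
        (1985, "We propose a study of cardiac function in a rat model."),
        (2020, "This novel, innovative, and scalable platform is transformative."),
    ]
    print(hype_rate_by_year(sample))  # {1985: 0.0, 2020: 500.0}

Plotting these yearly rates for each term would reproduce the kind of trend lines the authors report, with some terms rising steadily and others appearing abruptly in the late 2000s.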

While effective communication has long been central to the conduct of science, this report by Millar and colleagues1 highlights how scientists convey the quality—in this case the anticipated quality—of their work. Not surprisingly, scientists believe that their work is and will be of high quality. As has been widely publicized, though, there is increasing concern about a high prevalence of poor-quality science, what is sometimes referred to as a reproducibility crisis or a systematic absence of rigor. The NIH has articulated its concerns. In 2012, Landis et al2 called for improvements in the reporting of randomization, blinding, sample-size calculation, and data management in preclinical research. In 2014, NIH leaders described initiatives to improve the rigor of science that the agency funds.3 Most recently, in 2021, an NIH Working Group issued a report on steps the agency and scientific community can take to enhance the rigor, transparency, and translatability of animal research.4

The report by Millar and colleagues1 raises the question of whether there are alternatives to promotional adjectives for conveying the novelty and rigor—or lack of rigor—of scientific proposals and reports. Bibliometric methods exist to distinguish science that is disruptive from science that is developmental or incremental. There are also approaches that enable no-hype, high-quality study design, such as the preclinical Experimental Design Assistant, which some funding agencies require and others encourage, and the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) statement for clinical trials. And there are tools to ensure rigorous conduct and reporting, including preclinical registered reports, the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guideline, clinical trial registration, standardized clinical trial reports, and the CONSORT (Consolidated Standards of Reporting Trials) guideline.

Beyond these examples, others have undertaken to produce the equivalent of report cards for rigor. Button et al5 found a high prevalence of underpowered studies in preclinical and clinical neuroscience and described this as a “power failure,” given the inherent likelihood that underpowered studies produce misleading, invalid findings. Ramirez et al6 systematically coded thousands of published articles and found low rates of reporting of randomization, masking, and sample size estimation, as well as low rates of reporting on sex. At the NIH, the AlzPED (Alzheimer’s Disease Preclinical Efficacy Database) posts objective assessments of thousands of scientific reports across more than 20 domains; an analysis of more than 1000 studies found low rates of sample size calculation, blinding, and randomization.7 There are also established, well-accepted methods for grading the quality of clinical research studies that may be of interest to writers of systematic reviews and clinical guidelines.
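To give a concrete sense of what “underpowered” means, the sketch below runs a standard 2-sample t test power calculation in Python using the statsmodels library. This is a generic illustration under assumed parameters (two-sided α = .05, 80% power), not the specific method of Button et al,5 whose surveys spanned many study designs.

    from statsmodels.stats.power import TTestIndPower

    # Generic 2-sample t test power analysis (illustrative only)
    analysis = TTestIndPower()
    for d in (0.2, 0.5, 0.8):  # small, medium, large standardized effects
        n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
        print(f"d = {d}: about {n:.0f} subjects per group for 80% power")
    # Prints roughly 393, 64, and 26 subjects per group, respectively.

Studies enrolling far fewer subjects than these benchmarks will usually be inconclusive and, when they do reach statistical significance, will tend to exaggerate the true effect.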

Scientists may use promotional adjectives to describe their work—both work they propose and work they report—but, as Millar and colleagues1 imply, we as a scientific community need to transcend subjective language and appeals to emotion. Many scientific leaders and funders, including the NIH, are undertaking efforts to improve the rigor of scientific design, conduct, and reporting. Another tool to enhance scientific “truth in advertising” is the expectation, or even requirement, that scientists share their data and methods. The NIH is now implementing its Data Management and Sharing Policy, which will require scientists to prepare and abide by plans for managing and sharing their data according to established standards and in high-quality repositories. Some have referred to the policy as the equivalent of a sea change—someone else’s hype, not mine. Nonetheless, the NIH sees the implementation of this policy as an opportunity to shift the culture of science to one of greater transparency and quality.

The report by Millar and colleagues1 itself reflects a form of transparency. The authors took advantage of the transparency of the NIH RePORTER archive, and they have decided to share their data for others to use. Researchers might seek to explore the data to learn more about how scientists describe their proposals. For example, are certain kinds of scientists more or less likely to use promotional adjectives? How often does the hype in proposals match the rigor of published work? By thinking about these and other questions, all stakeholders in science might take the needed steps to transform hype to higher-quality science and health for all.

Article Information

Published: August 25, 2022. doi:10.1001/jamanetworkopen.2022.28683

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2022 Lauer MS. JAMA Network Open.

Corresponding Author: Michael S. Lauer, MD, Office of Extramural Research, Office of the Director, National Institutes of Health, One Center Drive, Room 144, Bethesda, MD 20892 (michael.lauer@nih.gov).

Conflict of Interest Disclosures: None reported.

References
1.
Millar N, Batalo B, Budgell B. Trends in the use of promotional language (hype) in abstracts of successful National Institutes of Health grant applications, 1985-2020. JAMA Netw Open. 2022;5(8):e2228676. doi:10.1001/jamanetworkopen.2022.28676
2.
Landis SC, Amara SG, Asadullah K, et al. A call for transparent reporting to optimize the predictive value of preclinical research. Nature. 2012;490(7419):187-191. doi:10.1038/nature11556
3.
Collins FS, Tabak LA. Policy: NIH plans to enhance reproducibility. Nature. 2014;505(7485):612-613. doi:10.1038/505612a
4.
National Institutes of Health. ACD Working Group on Enhancing Rigor, Transparency, and Translatability in Animal Research: final report. Published June 11, 2021. Accessed July 3, 2022. https://www.acd.od.nih.gov/documents/presentations/06112021_RR-AR%20Report.pdf
5.
Button KS, Ioannidis JP, Mokrysz C, et al. Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci. 2013;14(5):365-376. doi:10.1038/nrn3475
6.
Ramirez FD, Motazedian P, Jung RG, et al. Methodological rigor in preclinical cardiovascular studies: targets to enhance reproducibility and promote research translation. Circ Res. 2017;120(12):1916-1926. doi:10.1161/CIRCRESAHA.117.310628
7.
National Institutes of Health. Alzheimer’s Disease Preclinical Efficacy Database: analytics. Published 2022. Accessed July 3, 2022. https://alzped.nia.nih.gov/alzped-analytics