Millar and colleagues1 describe an analysis of the use of promotional language (“hype”) in abstracts of National Institutes of Health (NIH)–funded research projects. They extracted more than 900 000 abstracts from the NIH RePORTER (Research Portfolio Online Reporting Tools: Expenditures and Results) archive and measured the frequency of certain terms from 1985 to 2020. They found that the use of most promotional adjectives—such as novel, critical, and innovative—gradually and continuously increased, while the use of others—such as scalable and transformative—was essentially nonexistent in 1985 but became much more common beginning in the late 2000s. The authors acknowledge that the existence of hype in grant applications is not surprising, given that “the genre is inherently promissory,” and go on to write that applicants “increasingly describe their work in subjective terms and rely on appeals to emotion.”
While effective communication has long been central to the conduct of science, this report by Millar and colleagues1 highlights how scientists convey the quality—in this case the anticipated quality—of their work. Not surprisingly, scientists believe that their work is and will be of high quality. As has been widely publicized, though, there is increasing concern about a high prevalence of poor-quality science, what is sometimes referred to as a reproducibility crisis or a systematic absence of rigor. The NIH has articulated its concerns. In 2012, Landis et al2 called for improvements in the reporting of randomization, blinding, sample-size calculation, and data management in preclinical research. In 2014, NIH leaders described initiatives to improve the rigor of science that the agency funds.3 Most recently, in 2021, an NIH Working Group issued a report on steps the agency and scientific community can take to enhance the rigor, transparency, and translatability of animal research.4
The report by Millar and colleagues1 raises the question of whether there are alternatives to promotional adjectives to convey the novelty and rigor—or lack of rigor—of scientific proposals or reports. Bibliometric methods exist to distinguish science that is disruptive as opposed to developmental or incremental. There are approaches to enable a no-hype, high-quality study design. Examples include the preclinical Experimental Design Assistant, which some funding agencies require and others encourage, and the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) statement for clinical trials. There are also tools to ensure rigorous conduct and reporting. These include preclinical registered reports, the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guideline, clinical trial registration, standardized clinical trial reports, and the CONSORT (Consolidated Standards of Reporting Trials) guideline.
Beyond these examples, others have undertaken to produce the equivalent of report cards for rigor. Button et al5 found a high prevalence of underpowered studies in preclinical and clinical neuroscience and described this as a “power failure,” given the inherent likelihood that underpowered studies produce misleading, invalid findings. Ramirez et al6 systematically coded thousands of published articles, finding low rates of reporting on randomization, masking, and sample size estimation, as well as on sex. At the NIH, the AlzPED (Alzheimer’s Disease Preclinical Efficacy Database) posts objective assessments of thousands of scientific reports across more than 20 domains. An analysis of more than 1000 studies found low rates of sample size calculation, blinding, and randomization.7 There are also established, well-accepted methods for grading the quality of clinical research studies that may be of interest for writers of systematic reviews and clinical guidelines.
Scientists may use promotional adjectives to describe their work—both work they propose and work they report—but, as Millar and colleagues1 imply, we as a scientific community need to transcend subjective language and appeals to emotion. Many scientific leaders and funders, including the NIH, are undertaking efforts to improve the rigor of scientific design, conduct, and reporting. Another tool to enhance scientific “truth in advertising” is the expectation, or even requirement, that scientists share their data and methods. The NIH is now implementing its Data Management and Sharing Policy, which will require scientists to prepare and abide by plans for managing and sharing their data according to established standards and in high-quality repositories. Some have referred to the policy as the equivalent of a sea change—someone else’s hype, not mine. Nonetheless, the NIH sees the implementation of this policy as an opportunity to shift the culture of science to one of greater transparency and quality.
The report by Millar and colleagues1 itself reflects a form of transparency. The authors took advantage of the transparency of the NIH RePORTER archive, and they have decided to share their data for others to use. Researchers might seek to explore the data to learn more about how scientists describe their proposals. For example, are certain kinds of scientists more or less likely to use promotional adjectives? How often does the hype in proposals match the rigor of published work? By thinking about these and other questions, all stakeholders in science might take the needed steps to transform hype to higher-quality science and health for all.
Published: August 25, 2022. doi:10.1001/jamanetworkopen.2022.28683
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2022 Lauer MS. JAMA Network Open.
Corresponding Author: Michael S. Lauer, MD, Office of Extramural Research, Office of the Director, National Institutes of Health, One Center Drive, Room 144, Bethesda, MD 20892 (email@example.com).
Conflict of Interest Disclosures: None reported.
Lauer MS. From Hype to High-Quality Research. JAMA Netw Open. 2022;5(8):e2228683. doi:10.1001/jamanetworkopen.2022.28683