In 1901 and 1902, a Massachusetts physician, Duncan MacDougall, conducted an experiment in which he weighed 6 terminally ill patients immediately before and after they were pronounced dead.1 Dr MacDougall recorded a reduction of 21 g in 1 patient’s weight, which he believed represented the weight of the departed soul, while excluding the data from the other 5 patients for various reasons. He reported this finding in an article titled “Hypothesis Concerning Soul Substance Together With Experimental Evidence of the Existence of Such Substance,”2 although he acknowledged that “a large number of experiments would require to be made before the matter can be proven beyond any possibility of error.”2 Despite this disclaimer, the study was reported in The New York Times and many other prominent publications. Although these observations were completely discredited by the scientific community, searching for “21 gram soul” on the internet today returns nearly 20 000 000 hits.
We now view this episode derisively, but the scientific lapses MacDougall committed, such as excluding potentially eligible patients, using unreliable measures, and selectively reporting results favorable to the investigator’s hypothesis, remain distressingly common. Boutron and Ravaud3 have enumerated more than a dozen ways in which investigators misrepresent their results to appear more favorable, collectively referred to as “spin.” In their initial study, they found that more than half of the randomized controlled trial reports they examined misrepresented the results. The issues they identified that are most relevant to clinical research include the following:
Changing objectives or hypotheses to conform to the results.
Not distinguishing prespecified from post hoc analyses.
Failing to report protocol deviations.
Selective reporting or focus on outcomes favorable to the study hypothesis, particularly statistically significant results.
Disregarding results that contradict initial hypotheses.
Misleading interpretation (eg, ignoring regression to the mean, confounding, or small effect size).
Misinterpreting a significant P value as a measure of effect, or lack of significance as indicative of equivalence or safety.
Unfounded extrapolation to a larger population or different setting.
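The point about P values in the list above can be made concrete with a small simulation. This is an illustrative sketch using synthetic data only (the group sizes and the 0.02–standard deviation difference are invented for the example, not drawn from any study discussed here); it shows why a very small P value is not a measure of effect size, since a large enough sample makes even a clinically negligible difference statistically significant:

```python
import math
import random

def two_sample_z(a, b):
    """Two-sample z test on means (large-sample normal approximation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # unbiased sample variances
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    z = (ma - mb) / se
    # two-sided P value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return ma - mb, p

random.seed(0)
n = 500_000  # an implausibly large "trial", to make the point starkly
control = [random.gauss(0.00, 1.0) for _ in range(n)]
treated = [random.gauss(0.02, 1.0) for _ in range(n)]  # trivial true effect: 0.02 SD

diff, p = two_sample_z(treated, control)
print(f"mean difference = {diff:.3f} SD, two-sided p = {p:.2e}")
```

The true between-group difference is a clinically negligible 0.02 standard deviations, yet the P value falls far below any conventional threshold. Reporting only "P < .001," without the effect size and its confidence interval, would badly misrepresent the clinical importance of the finding.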
A growing body of literature has documented the prevalence of these lapses in fields as diverse as radiology, cancer biomarkers, and diagnostic testing, not to mention the basic sciences.4-6 Khan and colleagues7 performed a detailed analysis of cardiovascular trials, focusing on the tendency of authors to highlight differences that were numerically apparent but not statistically significant, while downplaying results showing essentially no difference between treatment groups. They examined 93 clinical trials of cardiovascular interventions, drawn from 6 prominent general medical and cardiology journals, in which the intervention had no statistically significant effect on the primary outcome, and applied a subset of the nosology put forth by Boutron and Ravaud,3 focusing on 3 ways in which authors highlighted results that were not statistically significant: emphasizing secondary results such as within-group comparisons, secondary outcomes, and subgroup or per-protocol analyses; interpreting statistically nonsignificant results for the primary outcomes as showing treatment equivalence or ruling out an adverse event; and describing the treatment as beneficial, with or without acknowledging the statistically nonsignificant primary outcome. At least 1 of these problems was detected in 53 abstracts (57%; 95% CI, 47%-67%) and 62 main texts (67%; 95% CI, 57%-75%), although the degree of spin in the articles’ conclusions was generally deemed low level. Khan et al7 also found differences among journals in the frequency of misrepresentation, but the numbers were relatively small.
As editors of JAMA Network Open, we pay extremely close attention to accurate reporting. By the time we celebrate our first anniversary in May 2019, we will have received approximately 2400 manuscript submissions and published approximately 450 original scientific articles. We meet twice weekly to review and discuss submissions, and almost invariably there are 1 or more manuscripts for which the authors have placed their results in a more favorable light than we consider justified. We automatically reject unregistered clinical trials and assiduously review the protocols of registered trials to ensure that the findings are reported in a manner consistent with prespecified groups, outcomes, and effect sizes. For other types of study designs, our editorial staff scours manuscripts to expunge unsupported causal language and correct overinterpretation of results. Our external referees are also alert to these problems and frequently cite them in their reviews. In addition, we require authors to adhere to the reporting criteria applicable to the study design used.
At the same time, we make every effort to embrace well-founded, negative findings as avidly as positive ones. Although negative trials typically do not attract as much attention, they are often critically important in countering erroneous inferences from earlier studies or debunking widely held misconceptions. On several occasions, we have requested that authors recast the conclusions of a manuscript from weakly positive to categorically negative.
The propensity of authors to report their work in the most favorable light is to some degree understandable, given the years of work often devoted to a single investigation and the well-known bias of scientific journals and the lay press toward positive results. It may also reflect confirmation bias, which leads us to accept data that confirm our preconceptions and to reject data that do not. Nevertheless, failure to maintain a critical and dispassionate perspective is a disservice to the research community, funding agencies, the practicing physicians who must decide which treatments to use, and the lay public. The lay public, in particular, may not have the expertise to interpret the important subtleties of statistical testing and is necessarily reliant on authors and editors to convey results in a balanced and accurate manner. Failure to do so can lead to exuberant adoption of marginally effective, useless, or even harmful clinical interventions as well as create unfounded anxiety among patients.
Scientific methods are continually evolving and becoming ever more sophisticated. A stated mission of JAMA Network Open is to embrace cutting-edge science and innovation. Concomitantly, as is the case for the entire JAMA Network, we are committed to maintaining the highest standards in selecting, reviewing, and editing the manuscripts that we publish. In an era in which truth is seen as a scarce commodity, dedication to fair and responsible reporting of scientific results is essential to preserving trust in the clinical research enterprise.
Published: May 3, 2019. doi:10.1001/jamanetworkopen.2019.2553
Correction: This article was corrected on July 3, 2019, to fix a typographical error in the final sentence.
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2019 Fihn SD. JAMA Network Open.
Corresponding Author: Stephan D. Fihn, MD, MPH, Department of Medicine, University of Washington, Box 359780, 325 Ninth Ave, Seattle, WA 98104 (email@example.com).
Conflict of Interest Disclosures: None reported.
Additional Contributions: I thank the editors and staff of JAMA Network Open for their review and helpful comments.
Fihn SD. Combating Misrepresentation of Research Findings. JAMA Netw Open. 2019;2(5):e192553. doi:10.1001/jamanetworkopen.2019.2553