During the pandemic caused by SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2), the dissemination of study findings (via preprints, news releases, news stories, and social media) and the publication of studies of testing and treatment in scientific journals increased rapidly. As of July 6, 2020, a PubMed search for COVID-19 yielded almost 30 000 reports.
In the context of a rapid increase in cases, hospitalizations, and deaths from COVID-19 (coronavirus disease 2019), clinicians, patients, policy makers, and the public at large are understandably eager for results of studies of prognosis, diagnosis, and treatment. Because of the urgency to implement the findings of research to stem the pandemic and its effects, clear and complete communication of study results is even more important than usual.
Communication Flaws and Failures
Government reports, journalism, talk shows, and public relations news releases from industry and academic institutions have often failed to communicate the results of studies well, and these failures have important consequences.
Failures of communication include (1) a focus on single study results without the context of other studies or acknowledgment that single studies are rarely definitive; (2) overemphasis on results, particularly relative effects, without recognition of important limitations; and (3) communications based on incomplete reports of studies and reports of studies that have not been adequately reviewed. Communications regarding studies involving remdesivir, dexamethasone, and hydroxychloroquine illustrate some of these issues.
In early April 2020, a published study found that in a cohort of 53 patients with COVID-19 who required oxygen and were treated with remdesivir, 36 had improvements in the level of oxygen support needed, 13 were not able to be extubated, and 7 died during a median follow-up of 18 days.1 Although a news release from the manufacturer of the medication mentioned limitations and stated that safety and efficacy were unknown, the headline of the release stated, “Remdesivir treatment resulted in clinical improvement.”2 A statement that strongly suggests cause and effect is an inappropriate description of the results of a small observational study.
On April 29, 2020, a news release from the National Institutes of Health (NIH) announcing findings from a large randomized placebo-controlled clinical trial (supported by the National Institute of Allergy and Infectious Diseases [NIAID]) that included 1063 patients hospitalized with COVID-19 highlighted faster recovery (median, 11 days vs 15 days to hospital discharge or resumption of normal activity) and a mortality benefit (8% vs 12%, not statistically significant) for those taking remdesivir.3 There was no accompanying (peer-reviewed) published article. That same day, a smaller trial of 237 hospitalized patients in China was published, reporting no difference between remdesivir and placebo in time to clinical improvement overall. Although not statistically significant, a faster time to clinical improvement was observed in a subgroup treated within 10 days of symptom onset. At a press conference on the day of the NIH news release, remdesivir was described as a “new standard of care.”
A month later the NIAID-sponsored trial was published4 and described the primary outcome differently (time to either hospital discharge or hospitalization for infection control purposes only), noting that the outcome had been changed from the one originally specified in the trial registration, a change not disclosed in the announcement a month earlier.
In June, a university news release reported results of dexamethasone as 1 of 6 possible treatments tested in a randomized trial: a 17% reduction in 28-day mortality in 6425 hospitalized patients with COVID-19.5 The news release reported subgroup analyses but did not report the number or proportion of deaths by randomized group. The World Health Organization called dexamethasone a “lifesaving scientific breakthrough.”6 A New York Times headline on June 16, 2020, emphasized causality: “Common drug reduces coronavirus deaths.” The source for these announcements was a news release, not an abstract, preprint, or peer-reviewed article. Steroids, including dexamethasone, have been studied for community-acquired pneumonia, with earlier studies suggesting mortality benefit that was not confirmed in later systematic reviews.
The story of hydroxychloroquine for COVID-19 seems long and complex despite being only a few months old. In short, on March 20, 2020, a small open-label trial reported that 14 of 20 hydroxychloroquine-treated patients vs 2 of 16 control patients with initially positive swabs tested negative by nasopharyngeal polymerase chain reaction on day 6 of treatment.7 The US president announced he had taken the drug and promoted its use, the US Food and Drug Administration (FDA) provided an Emergency Use Authorization, many used the medication (leading to a shortage), and the US stockpiled 63 million doses. Other studies then showed lack of efficacy.8
Subsequently, a randomized trial involving 821 patients found lack of efficacy of postexposure prophylaxis,9 the FDA withdrew its Emergency Use Authorization, and the NIH halted a randomized trial in progress because benefit was unlikely. Two weeks later, an observational study of 2541 hospitalized patients (subject to confounding by indication, because the drug may have been withheld from patients with a poor prognosis) reported that the medication was associated with lower mortality (14% vs 26% among those who did not receive the drug).10 The authors urged caution and called for a randomized trial. The media nonetheless reported that the drug cut the death rate significantly and helped patients “survive better.” News stories and social media reports took readers on a roller-coaster ride, dutifully reporting the results of each latest study as alternately showing efficacy, lack of efficacy, and harm.
News releases and news reports with simple, often provocative messages based on single studies have had substantial influence on medication use, the stock market, political discourse, and policy. More informed and rational medical, political, and economic decision-making might have resulted had attention been given to a few recommendations.
News reports of single studies should be matter-of-fact and favor reporting of main outcomes and absolute risks, specify patient populations, and highlight limitations in validity and generalizability. All such reports should include a note of caution that single studies are rarely definitive. They should also include the views of other independent experts in the field without conflicts of interest. Included in those expert views should be context, including comparison with the findings from other studies (of the reported and other treatments), the relative weight that should be given to the current vs other studies, and how such treatments are usually studied for the disease in question.
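The recommendation to favor absolute risks over relative effects can be made concrete with a small worked example. The sketch below is illustrative only: it uses the 8% vs 12% mortality figures quoted in the NIAID remdesivir announcement described above, and the helper function is hypothetical, not drawn from any cited study.

```python
def risk_summary(risk_treated: float, risk_control: float) -> dict:
    """Summarize a treatment effect in both absolute and relative terms."""
    ard = risk_control - risk_treated            # absolute risk difference
    rr = risk_treated / risk_control             # relative risk
    rrr = 1 - rr                                 # relative risk reduction
    nnt = 1 / ard if ard != 0 else float("inf")  # number needed to treat
    return {"ARD": ard, "RR": rr, "RRR": rrr, "NNT": nnt}

# Mortality of 8% with treatment vs 12% with placebo:
summary = risk_summary(0.08, 0.12)
# ARD ≈ 0.04 (4 percentage points), RRR ≈ 33%, NNT ≈ 25
```

A headline reporting a “33% reduction in deaths” and one reporting a “4 percentage point absolute difference” describe the same result; reports that state only the relative figure leave readers unable to judge the magnitude of benefit.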
It is important that complexity be mentioned and considered even if it is not popular among lay readers. Rather than focusing solely on claims that a treatment works, reports should point out the many ways that efficacy might be measured and which measurements the particular study did and did not report. Independent experts should be sought to discuss relevant complexity. With COVID-19, this might include the idea that hospitalized patients who receive mechanical ventilation are different from outpatients, that prophylaxis is different from treatment, and that treatments may work (or not) differently depending on the stage or severity of disease.
Reports of studies that are not based on full manuscripts should be particularly circumspect, and any study results announced only by news release should be reported with an abundance of caution and caveats, in headlines and throughout the text of stories. News reports based on results announced in news releases should recommend that any change in practice or policy await publication and scrutiny of the complete study data. These recommendations should also lead news organizations to question why findings announced before publication of a peer-reviewed article are newsworthy at that moment.
Trust—and Avoidable Harm
The COVID-19 pandemic has created perhaps the most challenging time for science communication in decades. Races are underway in parallel: to find answers to perplexing coronavirus questions, to announce research findings to clinical and scientific colleagues, and to report those findings to a confused and concerned global audience. There are no winners in these races if harm—even though unintentional—is wrought by the dissemination of hurried, incomplete, biased misinformation. Trust in science, medicine, public relations, and journalism may be in jeopardy in the intersection where these professions meet.
Time—even a few moments daily—can help prevent harm. Any professional communicating about this pandemic should take that time to reflect on how the words and the data matter, and then act accordingly.
Corresponding Author: Richard Saitz, MD, MPH, Department of Community Health Sciences, Boston University School of Public Health, 801 Massachusetts Ave, Fourth Floor, Boston, MA 02118 (Richard.Saitz@jamanetwork.org).
Published Online: July 13, 2020. doi:10.1001/jama.2020.12535
Conflict of Interest Disclosures: Dr Saitz reported receiving nonfinancial support from Alkermes; grants from the NIH (National Institute on Drug Abuse, National Institute on Alcohol Abuse and Alcoholism), the Patient-Centered Outcomes Research Institute (PCORI) via Philadelphia College of Osteopathic Medicine, PCORI via Public Health Management Corporation, Burroughs Wellcome Fund, and McLean Hospital; personal fees from the American Society of Addiction Medicine, National Council on Behavioral Healthcare, Leed Management Consulting Inc, Kaiser Permanente, UpToDate/Wolters Kluwer, Massachusetts Medical Society, Yale University, Group Health Inc, Charles University, National Committee on Quality Assurance, University of Oregon, Brandeis University, Karolinska Institutet, American Academy of Addiction Psychiatry, Partners, and Harvard Medical School; serving as president of the International Society of Addiction Journal Editors; editor of a book published by Springer; editor in chief of Journal of Addiction Medicine; serving on the editorial board of Journal of Addictive Diseases and Addiction Science & Clinical Practice; serving as an expert witness in malpractice cases related to alcohol and other drug use disorders; and consulting for CheckUp & Choices and ABT Corporation (not remunerated). Mr Schwitzer reported no disclosures.
Saitz R, Schwitzer G. Communicating Science in the Time of a Pandemic. JAMA. 2020;324(5):443–444. doi:10.1001/jama.2020.12535