Viewpoint
November 19, 2018

Protecting the Value of Medical Science in the Age of Social Media and “Fake News”

Author Affiliations
  • 1Penn Medicine Center for Health Care Innovation, University of Pennsylvania Perelman School of Medicine, Philadelphia
  • 2Department of Emergency Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia
  • 3Center for Health Equity Research and Promotion, Philadelphia Veterans Affairs Medical Center, Philadelphia, Pennsylvania
JAMA. Published online November 19, 2018. doi:10.1001/jama.2018.18416

New threats to effective scientific communication make it more difficult to separate science from science fiction. Patients can be harmed by misinformation or by misplaced trust; for example, patients with cancer who use complementary medicine are more likely than nonusers to refuse evidence-based therapies and have higher mortality.1 Researchers who produce objective science can no longer focus on simply disseminating the message. Now they must also defend that evidence from challenges to the validity and interpretation of their research and, at times, act proactively to ensure that unsubstantiated messages do not compete with the correct message. For instance, the unfounded yet persistent beliefs linking autism with vaccination demonstrate both the health dangers of misinformation and the effort required to counteract it. This adversarial stance seems destined to decrease trust in the scientific enterprise, but the alternatives seem worse.

Contributing Factors

Three related factors contribute to current circumstances. The first is the rapid decrease in the cost of publishing information. When getting information to the public was expensive, communication could come only from governments or highly resourced private interests. Communication also came from publishing houses that developed editorial processes to protect the value of their capital investments. These organizations could communicate correct or incorrect information as they saw fit, but there were fewer of them, and they were typically identifiable, which made their biases easier to understand. Now anyone can tweet or post on Facebook. Social media is indeed democratizing, but its novel dynamics allow strategic content to infiltrate trusted social networks, posing and propagating as influential commentary.

The second is the increasing ability to select what information is heard. When the public was restricted to the local newspaper or radio station, everyone heard the same thing. The powerful urge to favor information that confirms prior views, paired with a new ability to filter out alternatives, creates the echo chamber of contemporary media. Twitter accounts presumed to be bots have generated positive online sentiment about the use of e-cigarettes.2 Clinicians and scientists are also vulnerable, given their increased ability to expose themselves selectively to confirming evidence.

The third, and more recent, is that the ubiquity of misinformation has itself created a tool to perpetuate it. Opponents of the content of a report or message need only decry it as “fake news” to invoke a conspiracy against that content. This single phrase almost seems to initiate an anamnestic response among those disinclined to accept or believe the content, automating cascades of disbelief and dismissal. Misinformation has no constraints and can be strategically designed for spread; for instance, false information about the Zika pandemic achieved greater uptake than accurate posts.3

Social media has created an unprecedented ability to spread sentiment and exert influence. Individuals exposed to fewer positive expressions on social media subsequently post fewer positive expressions themselves.4 The world has been alarmed by revelations of the politically motivated release of misinformation through social media channels and by the reach that information has achieved. Science and health are just as vulnerable to strategic manipulation.

Countermeasures

How can scientists and institutions that communicate scientific information anticipate and respond to these threats to their value? What countermeasures can they deploy?

Provenance

When accounts surfaced about the use of Facebook to influence political thought, the evidence came largely from 2 avenues: revealing the identity and motives of groups who had used these media strategically, or revealing the provenance of specific messages that had propagated through them. The ability to credit information from scientific journals and, in turn, to discredit information without such sources is perhaps the most conventional countermeasure to misinformation. Information from journals usually comes with explicit identification of sources and their conflicts of interest, and is curated through peer review.

Although each of these steps occasionally fails, journals offer provenance structurally designed for the precise purpose of separating fiction from nonfiction and helping readers understand the difference. Because of this critical role, journals may be the ally in greatest need of support. If the peer review process did not exist, one of the first actions scientists would take to counteract misinformation would be to invent it. Thus, it is surprising that some scientists are now embracing preprint publication, which eliminates many of the protections between the creation of information and its dissemination.5 Science may benefit more from strengthening the reality and perception of its review than from sacrificing these safeguards for the sake of speed.

Engagement

Thoughtful engagement by scientists on social media is important but incomplete. The uncoordinated efforts of individual scientists cannot take on resourced interests with fleets of bots; a bot capable of reaching millions will generate more messages and activity than a researcher with 1000 followers every time. What is needed is a campaign that engages the platforms patients actually use. In some cases, fake news can be treated as a teachable moment and an opportunity for researchers to clarify scientific findings. More significant occurrences of misinformation may require stronger responses: recently, the task has meant defending good research from attack, and the more aggressive stance is disabling misinformers outright. However, because moving down that slope puts credibility at risk, evidence-based organizations that trade most on that credibility must weigh those risks carefully.

Transparency

One element that makes misinformation so potent is that it can target those most receptive to it. Precision marketing recapitulates precision medicine. When individuals share their symptoms, diet, medication use, and medical histories, they leave enough digital residue to define a targetable persona; Facebook posts alone can be used to predict a diagnosis of depression.6 Just as research increasingly focuses on returning findings to patients,7 there could be a concerted effort to help patients access the information underlying these personas and understand how those personas may distort their world view.

Narrative

Evocative stories are typically far more emotionally persuasive than multiple tables reporting systematic findings. In experimental settings, participants randomized to read about a person experiencing poverty donated more money than those randomized to read ostensibly more systematic and objective statistics reflecting the broad extent of that poverty; more concerning, participants randomized to read both donated an amount intermediate to the other 2 groups.8 The lesson is not merely that evocative anecdotes are more emotionally persuasive than systematic data, but that data often weaken emotional appeal rather than strengthen it.

Reputation

Social media is leaving peer-reviewed communication behind as some scientists begin to worry less about their citation index (which takes years to develop) and more about their Twitter response (measurable in hours). Science is not supposed to be a popularity contest, and yet humans delight in competitive rankings. Published college rankings have used more multidimensional criteria to unseat what are, literally, old schools. At the same time, the organizations that produce such rankings may have merely substituted their own metrics to elevate themselves rather than the cause of higher education. Some journals now link to aggregators like Altmetric, which report tweets about articles with the immediacy of stock tickers. The appeal is irresistible: Altmetric ratings deliver fame in 15-minute doses. Like the college rankings, these alternative metrics broaden the understanding of the value of a scientific contribution. One approach is to develop additional indices that offer immediacy yet are not so subject to flights of fancy.

Conclusion

Scientific information and misinformation alike are amplified through social media. As those channels become threats to scientific integrity, there are opportunities to develop countermeasures and specific strategies for vigilance and response.

Article Information

Corresponding Author: Raina M. Merchant, MD, MSHP, University of Pennsylvania, 3400 Civic Center Blvd, Philadelphia, PA 19104 (raina.merchant@uphs.upenn.edu).

Published Online: November 19, 2018. doi:10.1001/jama.2018.18416

Conflict of Interest Disclosures: Dr Asch reported being a partner and part-owner of VAL Health, a behavioral economics consulting firm. No other disclosures were reported.

References
1. Johnson SB, Park HS, Gross CP, Yu JB. Complementary medicine, refusal of conventional cancer therapy, and survival among patients with curable cancers. JAMA Oncol. 2018;4(10):1375-1381. doi:10.1001/jamaoncol.2018.2487
2. Martinez LS, Hughes S, Walsh-Buhi ER, Tsou MH. “Okay, we get it: you vape”: an analysis of geocoded content, context, and sentiment regarding e-cigarettes on Twitter. J Health Commun. 2018;6:1-13.
3. Sharma M, Yadav K, Yadav N, Ferdinand KC. Zika virus pandemic-analysis of Facebook as a social media health information platform. Am J Infect Control. 2017;45(3):301-302. doi:10.1016/j.ajic.2016.08.022
4. Kramer AD, Guillory JE, Hancock JT. Experimental evidence of massive-scale emotional contagion through social networks. Proc Natl Acad Sci U S A. 2014;111(24):8788-8790. doi:10.1073/pnas.1320040111
5. Kaiser J. The preprint dilemma. Science. 2017;357(6358):1344-1349. doi:10.1126/science.357.6358.1344
6. Eichstaedt JC, Smith RJ, Merchant RM, et al. Facebook language predicts depression in medical records. Proc Natl Acad Sci U S A. 2018;115(44):11203-11208. doi:10.1073/pnas.1802331115
7. Wong CA, Hernandez AF, Califf RM. Return of research results to study participants: uncharted and untested. JAMA. 2018;320(5):435-436. doi:10.1001/jama.2018.7898
8. Small DA, Loewenstein G, Slovic P. Sympathy and callousness: the impact of deliberative thought on donations to identifiable and statistical victims. Organ Behav Hum Decis Process. 2007;102(2):143-153. doi:10.1016/j.obhdp.2006.01.005