The emergence of the 2019 novel coronavirus has led to more than a pandemic; indeed, COVID-19 is spawning myriad other concerns as it rapidly marches around the globe. One of these concerns is a surge of misinformation, which we argue should be viewed as a risk in its own right, and to which insights from decades of risk communication research must be applied. Further, when the subject of misinformation is itself a risk, as in the case of COVID-19, we argue for the utility of viewing the problem as a multi-layered risk communication problem. In such circumstances, misinformation functions as a meta-risk that interacts with and complicates publics' perceptions of the original risk. Therefore, as the COVID-19 "misinfodemic" intensifies, risk communication research should inform the efforts of key risk communicators. To this end, we discuss the implications of risk research for efforts to fact-check COVID-19 misinformation and offer practical recommendations.
The COVID-19 pandemic went hand in hand with what some have called a “(mis)infodemic” about the virus on social media. Drawing on partisan motivated reasoning and partisan selective sharing, this study examines the influence of political viewpoints, anxiety, and the interactions of the two on believing and willingness to share false, corrective, and accurate claims about COVID-19 on social media. A large-scale 2 (emotion: anxiety vs relaxation) × 2 (slant of news outlet: MSNBC vs Fox News) experimental design with 719 US participants shows that anxiety is a driving factor in belief in and willingness to share claims of any type. Especially for Republicans, a state of heightened anxiety leads them to believe and share more claims. Our findings expand research on partisan motivated reasoning and selective sharing in online settings, and enhance the understanding of how anxiety shapes individuals’ processing of risk-related claims in issue contexts with high uncertainty.
Advances in gene editing technologies for human, plant, and animal applications have led to calls from bench and social scientists, as well as a wide variety of societal stakeholders, for broad public engagement in the decision-making about these new technologies. Unfortunately, there is limited understanding among the groups calling for public engagement on CRISPR and other emerging technologies about 1) the goals of this engagement, 2) the modes of engagement and what we know from systematic social scientific evaluations about their effectiveness, and 3) how to connect the products of these engagement exercises to societal decision or policy making. Addressing all three areas, we systematize common goals, principles, and modalities of public engagement. We evaluate empirically the likely successes of various modalities. Finally, we outline three pathways forward that deserve close attention from the scientific community as we navigate the world of Life 2.0.
Recently, the Surgeon General released an "Advisory on Building a Healthy Information Environment" to confront health misinformation (Office of the Surgeon General, 2021), and a coalition of groups called for U.S. President Joe Biden to form a task force that could address mis- and disinformation, with a mandate to deliver "a comprehensive set of principles and overall policy, funding, and legislative recommendations" for addressing pernicious falsehoods (Fried, 2021). This call cited the incredible loss of life in the COVID-19 pandemic and the recent insurrection at the U.S. Capitol as examples of the "clear and present threat" that "deceptions" now pose, reflecting a declaration by the World Health Organization (WHO) that we are facing an "infodemic" of misinformation. Ironically, it is not clear that claims of a "novel" misinformation crisis are empirically supported. Yet, this has not stopped scholars across many disciplines from fervently researching interventions to counter the spread and prevent the uptake of false information and to correct existing incorrect views among audiences. As a result, social science finds itself in a situation where we are operating with incomplete information, at best. At worst, we are operating from inaccurate assumptions when it comes to diagnosing alleged infodemics, understanding the mechanisms that have created or perpetuated them, and determining the need for and efficacy of scalable interventions in the real world.

Our Renewed Fascination With an Old Problem

Much of the perceived urgency of the current crisis of misinformation, as seen by some, is based on two assumptions. First, groups like the WHO and others have found very receptive audiences among academics and policy makers for their argument that we are seeing misinformation and disinformation at a much larger scale than we have historically witnessed.
Second, the urgency with which interventions are proposed and promoted, regardless of the social scientific evidence base surrounding them, seems to be fed by the assumption that the alleged infodemic significantly distorts attitudes and behaviors of a citizenry that would otherwise hold issue and policy stances that are consistent with the best available scientific evidence. Unfortunately, both assumptions have limited foundations in the social scientific literature.

Misinformation as a Historical By-product of Liberal Democracies

Even the most fervent crusaders against an alleged "misinformation crisis" would probably concede that disinformation