2021
DOI: 10.1609/icwsm.v15i1.18103
Misinformation Adoption or Rejection in the Era of COVID-19

Abstract: The COVID-19 pandemic has led to a misinformation avalanche on social media, which produced confusion and insecurity in netizens. Learning how to automatically recognize adoption or rejection of misinformation about COVID-19 enables the understanding of the effects of exposure to misinformation and the threats it presents. By casting the problem of recognizing misinformation adoption or rejection as stance classification, we have designed a neural language processing system operating on micro-blogs which take…
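The abstract casts misinformation adoption or rejection as stance classification over micro-blog embeddings. A minimal sketch of such a classifier is shown below; the three class names (Adopt / Reject / Neither), the 768-dimensional input, and the single linear head are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class StanceClassifier(nn.Module):
    """Minimal sketch: maps a pooled micro-blog embedding z to stance logits.
    Class set (Adopt / Reject / Neither) and 768-d input are assumptions."""
    def __init__(self, embed_dim: int = 768, num_stances: int = 3):
        super().__init__()
        self.head = nn.Linear(embed_dim, num_stances)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.head(z)  # raw logits; apply softmax for probabilities

model = StanceClassifier()
z = torch.randn(4, 768)       # batch of 4 pooled tweet embeddings
logits = model(z)             # shape: (4, 3)
pred = logits.argmax(dim=-1)  # predicted stance index per tweet
```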

Cited by 20 publications (13 citation statements)
References 22 publications
“…The STANCEID-BASELINE system utilizes the "[CLS]" embedding from COVID-Twitter-BERT-v2 as the framing stance recognition input embedding z. The STANCEID system utilizes Lexical, Emotion, and Semantic Graph Attention Networks to produce the framing stance recognition input embedding z (Weinzierl, Hopfer, and Harabagiu 2021). The STANCEID-MORALITY system, described in Section 4 and illustrated in Figure 4, utilizes Lexical, Emotion, and Semantic Graph Attention Networks along with Hopfield Pooling of Moral Foundations to perform framing stance recognition.…”
Section: Results
confidence: 99%
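The citation statement above describes Lexical, Emotion, and Semantic Graph Attention Networks producing the stance-recognition input embedding z. A minimal single-head graph attention layer in that spirit (Veličković et al.'s GAT formulation) is sketched below; the dense adjacency input and dimensions are simplifying assumptions, not the cited systems' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Minimal single-head graph attention layer (sketch).
    Dense 0/1 adjacency and dimensions are simplifying assumptions."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, in_dim) node features; adj: (N, N) 0/1 adjacency
        Wh = self.W(h)                                   # (N, out_dim)
        N = Wh.size(0)
        # pairwise attention logits e_ij = LeakyReLU(a([Wh_i || Wh_j]))
        Wh_i = Wh.unsqueeze(1).expand(N, N, -1)
        Wh_j = Wh.unsqueeze(0).expand(N, N, -1)
        e = F.leaky_relu(self.a(torch.cat([Wh_i, Wh_j], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float('-inf'))       # attend only to neighbors
        alpha = torch.softmax(e, dim=-1)                 # (N, N) attention weights
        return F.elu(alpha @ Wh)                         # aggregated node features

layer = GATLayer(in_dim=16, out_dim=32)  # F = 32 matches the cited GAT hidden size
h = torch.randn(5, 16)
adj = torch.eye(5)                        # self-loops so every node has a neighbor
out = layer(h, adj)                       # shape: (5, 32)
```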
“…Hyperparameters were selected based on initial experiments on the training and development collections of COVAXFRAMES. All system hyperparameters follow those of Weinzierl, Hopfer, and Harabagiu (2021), while STANCEID-MORALITY also performs Hopfield Pooling of Moral Foundations with p = 6, a GAT hidden size F = 32, and d = 3 stacked GAT layers. All systems follow the same training schedule: 10 epochs, a linearly decayed learning rate of 5e-4 with a warm-up for 10% of training steps, and an attention drop-out of 0.3.…”
Section: Results
confidence: 99%
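The hyperparameters above mention Hopfield Pooling of Moral Foundations with p = 6. A sketch of attention-based pooling in the spirit of modern Hopfield networks (Ramsauer et al. 2021) is given below: p learned query patterns retrieve from the token states. Reading p = 6 as one query per moral foundation, and the fixed inverse temperature, are illustrative assumptions, not the cited system's implementation.

```python
import torch
import torch.nn as nn

class HopfieldPooling(nn.Module):
    """Sketch of Hopfield-style pooling: p learned query patterns attend
    over token states. p = 6 queries (one per moral foundation) is an
    illustrative reading of the cited hyperparameters."""
    def __init__(self, dim: int, num_patterns: int = 6, beta: float = 1.0):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_patterns, dim))
        self.beta = beta  # inverse temperature of the Hopfield update

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (T, dim) token representations of one post
        attn = torch.softmax(self.beta * self.queries @ states.T, dim=-1)  # (p, T)
        return attn @ states  # (p, dim): one retrieved pattern per query

pool = HopfieldPooling(dim=32, num_patterns=6)
states = torch.randn(10, 32)  # 10 token states
pooled = pool(states)         # shape: (6, 32)
```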
“…One class involves computational methods to detect misinformation. These computer-based approaches have shown remarkable success, leveraging signals such as sharing patterns (Rosenfeld, Szanto, and Parkes 2020), text features (Granik and Mesyura 2017), account activity (Breuer, Eilat, and Weinsberg 2020), user stance (Weinzierl, Hopfer, and Harabagiu 2021) and visual features for website screenshots (Abdali et al. 2021). However, the nuanced nature of truth, the limited availability of labeled training data (Rubin, Chen, and Conroy 2015; Bozarth, Saraf, and Budak 2020), and the non-stationarity problem whereby the signatures of misinformation can change rapidly (e.g.…
Section: Approaches To Detecting Misinformation
confidence: 99%
“…In addition, methods employing professional fact-checkers suffer from a general lack of bi-partisan trust: one study from the Pew Research Center found that 70% of Republicans and 48% of Americans say fact-checking efforts tend to "favor one side" (Walker and Gottfried 2019). Another proposed alternative is fully algorithmic methods for detecting misinformation (Abdali et al. 2021; Weinzierl, Hopfer, and Harabagiu 2021) (see Related Work for a review). However, these methods fundamentally struggle to keep up with the non-stationarity of misinformation, and require ground-truth labeling.…”
Section: Introduction
confidence: 99%
“…There have been different approaches proposed to deal with this growing challenge. For example, researchers have been developing effective methods to automatically detect online misinformation (Gatto, Basak, and Preum 2023; Weinzierl, Hopfer, and Harabagiu 2021; Shu, Wang, and Liu 2019). This is used to support expert fact-checkers who would otherwise be overwhelmed by the amount of content to be fact-checked.…”
Section: Introduction
confidence: 99%