Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.376
Knowledge Enhanced Masked Language Model for Stance Detection

Abstract: Detecting stance on Twitter is especially challenging because of the short length of each tweet, the continuous coinage of new terminology and hashtags, and the deviation of sentence structure from standard prose. Fine-tuned language models using large-scale in-domain data have been shown to be the new state of the art for many NLP tasks, including stance detection. In this paper, we propose a novel BERT-based fine-tuning method that enhances the masked language model for stance detection. Instead of random toke…
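The abstract is cut off here, but its core idea is to replace uniform random masking with masking biased toward stance-important tokens. Below is a minimal sketch of such importance-weighted MLM masking with Hugging Face Transformers; the importance list, the `boost` factor, and the `mask_tokens` helper are illustrative assumptions, not the authors' released code, and the standard 80/10/10 mask/random/keep split is omitted for brevity.

```python
# Sketch: bias MLM masking toward stance-important tokens (assumed setup).
import torch
from transformers import BertTokenizerFast, BertForMaskedLM

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical importance set: in the paper's spirit, these would be words
# identified as highly stance-distinguishing (e.g., via log-odds-ratio).
important_ids = set(tokenizer.convert_tokens_to_ids(["climate", "hoax"]))

def mask_tokens(input_ids, base_p=0.15, boost=3.0):
    """Mask each token with probability base_p, boosted for important tokens."""
    probs = torch.full(input_ids.shape, base_p)
    for i, tok in enumerate(input_ids.tolist()):
        if tok in important_ids:
            probs[i] = min(1.0, base_p * boost)
    # Never mask [CLS]/[SEP]/padding.
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(
            input_ids.tolist(), already_has_special_tokens=True),
        dtype=torch.bool)
    probs[special] = 0.0
    masked = torch.bernoulli(probs).bool()
    labels = input_ids.clone()
    labels[~masked] = -100                     # loss only on masked positions
    corrupted = input_ids.clone()
    corrupted[masked] = tokenizer.mask_token_id
    return corrupted, labels

enc = tokenizer("climate change is a hoax", return_tensors="pt")
ids, labels = mask_tokens(enc["input_ids"][0])
loss = model(input_ids=ids.unsqueeze(0), labels=labels.unsqueeze(0)).loss
```

With `boost=3.0`, stance-bearing words are masked at roughly 45% instead of 15%, so the MLM objective spends more of its capacity modeling exactly the vocabulary that discriminates stance.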

Cited by 49 publications (30 citation statements). References 34 publications.
“…Previous studies of stance detection largely focus on target-specific stance detection, where the training and inference stages share the same pre-defined set of targets [3,11,19,22,33,36]. A task similar to ZSSD in prior research is cross-target stance detection, where a classifier trained on a known target is adapted to unseen but related targets [23,41,43,45].…”
Section: Related Work 2.1 Zero-shot Stance Detection
confidence: 99%
“…Moreover, the occurrence of sentiment-bearing content along with the entities also signals stances (Mohammad et al., 2016b). Therefore, we adopt a masking strategy that upsamples entity tokens (Sun et al., 2019; Guu et al., 2020; Kawintiranon and Singh, 2021) and sentiment words for the MLM objective, improving on prior pre-training work that only considers article-level comparison (Baly et al., 2020).…”
Section: Entity- and Sentiment-aware MLM
confidence: 99%
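The quoted passage describes deciding *which* tokens to upsample for masking. A rough sketch of that selection step is below; spaCy NER and a toy sentiment lexicon stand in for whatever resources the citing work actually used (both are assumptions), and `mask_weights` is a hypothetical helper.

```python
# Sketch: per-token mask probabilities that upweight entities and sentiment
# words, as the quoted entity- and sentiment-aware MLM strategy describes.
import spacy

nlp = spacy.load("en_core_web_sm")
SENTIMENT_WORDS = {"great", "terrible", "love", "hate"}  # toy lexicon

def mask_weights(text, base_p=0.15, boost=3.0):
    """Return (token, mask-probability) pairs with entities and
    sentiment words upweighted for the MLM objective."""
    doc = nlp(text)
    weights = []
    for tok in doc:
        p = base_p
        if tok.ent_type_ or tok.lower_ in SENTIMENT_WORDS:
            p = min(1.0, base_p * boost)
        weights.append((tok.text, p))
    return weights

print(mask_weights("I love how Biden handled the debate"))
# e.g. "love" (sentiment) and "Biden" (PERSON entity) get ~0.45, rest 0.15
```

These probabilities would then feed a biased masking routine like the one sketched after the abstract above.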
“…Inspired by prior research (Kawintiranon and Singh, 2021), we pre-trained BERT on a large corpus of ∼5M posts from Parler and added the most important stance tokens towards/against QAnon to the original BERT vocabulary. Overall, this should allow our model to capture Parler-specific language more accurately than standard BERT on the downstream task of stance detection (Kawintiranon and Singh, 2021). Subsequently, we fine-tuned the new language model on a sample of 1,250 stance-labeled posts and computed the average stance of a user towards QAnon (details are in our GitHub).…”
Section: Feature Extraction
confidence: 99%
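The vocabulary-extension step in this quote maps directly onto standard Hugging Face APIs. A minimal sketch follows; the token list is illustrative, not taken from the cited study.

```python
# Sketch: extend BERT's vocabulary with domain stance tokens before
# continued in-domain pre-training, in the spirit of the quoted setup.
from transformers import BertTokenizerFast, BertForMaskedLM

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

new_tokens = ["#wwg1wga", "qanon"]   # hypothetical high-importance stance tokens
num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix

# The extended model would then be pre-trained with the MLM objective on the
# in-domain corpus (e.g., the ~5M Parler posts) before stance fine-tuning.
```

Adding whole-word entries for these tokens prevents the WordPiece tokenizer from fragmenting them into uninformative subwords, so the model learns a single dedicated embedding for each stance marker during in-domain pre-training.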