Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1675

STANCY: Stance Classification Based on Consistency Cues

Abstract: Controversial claims are abundant in online media and discussion forums. A better understanding of such claims requires analyzing them from different perspectives. Stance classification is a necessary step for inferring these perspectives in terms of supporting or opposing the claim. In this work, we present a neural network model for stance classification leveraging BERT representations and augmenting them with a novel consistency constraint. Experiments on the Perspectrum dataset, consisting of claims and users' perspectives from various debate websites, demonstrate the effectiveness of our approach over state-of-the-art baselines.

Cited by 31 publications (32 citation statements)
References 17 publications
“…The goal of our stance extractor is to model the stance of the user reply R_i to the post P automatically, where R_i is the i-th reply of post P. Motivated by recent stance classification works [17, 18], we first construct post-reply pairs and concatenate each pair into a single sentence using a special classification token [CLS] and two separator tokens [SEP], as shown in Fig 2. Then we feed each concatenated post-reply pair into BERT, and the final hidden-state representation corresponding to the [CLS] token is used as the stance representation X_{P|R_i} ∈ ℝ^H, where H = 768. Specifically, the BERT model we use here consists of 12 layers and 12 heads, and is pre-trained on lower-cased English text, that is,…”
Section: Stance Extractor
confidence: 99%
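The pipeline quoted above maps directly onto standard BERT usage. Below is a minimal sketch, assuming the HuggingFace transformers library and the bert-base-uncased checkpoint (12 layers, 12 heads, H = 768, lower-cased English); the post/reply strings and variable names are illustrative, not from the citing paper.

```python
# Minimal sketch of the stance-representation step described above.
# Assumptions: HuggingFace `transformers`, bert-base-uncased checkpoint;
# the example post/replies are illustrative only.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

post = "Vaccines should be mandatory for school children."
replies = [
    "Absolutely, herd immunity protects everyone.",
    "No, this should be a parental choice.",
]

stance_reprs = []
with torch.no_grad():
    for reply in replies:
        # Encoding a (post, reply) pair yields "[CLS] post [SEP] reply [SEP]",
        # i.e. one [CLS] token and two [SEP] tokens, as in the quoted setup.
        inputs = tokenizer(post, reply, return_tensors="pt")
        outputs = model(**inputs)
        # Final hidden state of the [CLS] token (position 0); shape (768,).
        cls_state = outputs.last_hidden_state[0, 0]
        stance_reprs.append(cls_state)

print(stance_reprs[0].shape)  # torch.Size([768])
```

Each resulting 768-dimensional vector is the stance representation X_{P|R_i} that the quoted work then feeds into its downstream classifier.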
“…49 out of its 947 claims are from procon (Chen et al., 2019). Claims and perspectives are short sentences and have been used for stance detection in (Popat et al., 2019).…”
Section: Related Work
confidence: 99%
“…There are also some neural network approaches that leverage lexical features (Riedel et al., 2017; Hanselowski et al., 2018). A consistency constraint is proposed to jointly model the topic and opinion using the BERT architecture (Popat et al., 2019). It trains the whole massive network for label prediction.…”
Section: Related Work
confidence: 99%
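This statement only names the consistency constraint; the sketch below shows one way a stance-classification loss could be combined with such a representation-consistency term, assuming a cosine-similarity comparison between the claim-only and claim-plus-perspective [CLS] representations. The exact loss in Popat et al. (2019) may differ, and all function names, the margin form, and the label convention here are illustrative assumptions.

```python
# Rough sketch of a joint objective in the spirit of the consistency
# constraint described above. NOT the exact STANCY loss: the margin
# formulation, label convention (0 = support, 1 = oppose), and all
# names are illustrative assumptions.
import torch
import torch.nn.functional as F

def consistency_augmented_loss(cls_claim_persp, cls_claim, logits, labels,
                               margin=0.5, alpha=1.0):
    """cls_claim_persp: [CLS] states for (claim, perspective) pairs, (B, H)
    cls_claim:        [CLS] states for the claim alone, (B, H)
    logits:           stance classifier outputs, (B, num_classes)
    labels:           0 = support, 1 = oppose, (B,)
    """
    # Standard cross-entropy over the stance labels.
    ce = F.cross_entropy(logits, labels)

    # Consistency cue: a supporting perspective should keep the pair
    # representation close to the claim-only representation, while an
    # opposing one is pushed at least `margin` apart in cosine space.
    sim = F.cosine_similarity(cls_claim_persp, cls_claim, dim=-1)
    support = (labels == 0).float()
    consistency = support * (1.0 - sim) + (1.0 - support) * F.relu(sim - margin)

    return ce + alpha * consistency.mean()
```

Because both terms backpropagate through BERT, the whole network is trained end-to-end for label prediction, matching the citing authors' observation above.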