Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.291

Enhancing Cross-target Stance Detection with Transferable Semantic-Emotion Knowledge

Abstract: Stance detection is an important task that aims to classify the attitude of an opinionated text towards a given target. Remarkable success has been achieved when sufficient labeled training data is available. However, annotating sufficient data is labor-intensive, which creates significant barriers to generalizing the stance classifier to data with new targets. In this paper, we propose a Semantic-Emotion Knowledge Transferring (SEKT) model for cross-target stance detection, which uses the external…
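To make the task framing concrete, here is a minimal sketch (not the SEKT architecture) of stance detection as target-conditioned text classification with an off-the-shelf encoder; the model name, the three-way label set, and the sentence-pair scheme are placeholder assumptions, and the classification head would still need fine-tuning on labeled stance data before its predictions mean anything.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["against", "favor", "none"]  # assumed 3-way stance label scheme

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)  # randomly initialized head: fine-tune on labeled (text, target, stance) data first

def predict_stance(text: str, target: str) -> str:
    # Encode the opinionated text and the target as a sentence pair so the
    # classifier conditions its decision on the given target.
    inputs = tokenizer(text, target, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(predict_stance("We cannot keep ignoring the science.", "Climate Change is a Real Concern"))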

Cited by 75 publications (79 citation statements). References 21 publications.
“…Baselines We compare against a BERT (Devlin et al., 2019) baseline that encodes the document and topic jointly for classification, as in Allaway and McKeown (2020), and BiCond, bidirectional conditional encoding (§2.2) without attention (Augenstein et al., 2016). Additionally, we compare against published results from three prior models: SEKT, which uses a knowledge graph to improve topic transfer (Zhang et al., 2020); VTN, adversarial learning with a topic-oriented memory network; and CrossN, BiCond with an additional topic-specific self-attention layer (Xu et al., 2018).…”
Section: Adversarial Training (mentioning, confidence: 99%)
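For readers unfamiliar with the BiCond baseline mentioned above, the rough sketch below shows the idea of bidirectional conditional encoding: the target is encoded first, and its final LSTM states initialize the LSTM that reads the text, with no attention layer on top. The dimensions, vocabulary handling, and last-step pooling are simplifying assumptions, not the exact published configuration.

import torch
import torch.nn as nn

class BiCond(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hid_dim=128, num_labels=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.target_lstm = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.text_lstm = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hid_dim, num_labels)

    def forward(self, text_ids, target_ids):
        # Encode the target; (h, c) carry its summary.
        _, (h, c) = self.target_lstm(self.emb(target_ids))
        # Condition the text encoder on the target by reusing (h, c) as its initial states.
        out, _ = self.text_lstm(self.emb(text_ids), (h, c))
        # Use the last time step (no attention) as the text representation.
        return self.classifier(out[:, -1, :])

model = BiCond(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 20)), torch.randint(0, 10000, (2, 5)))
print(logits.shape)  # torch.Size([2, 3])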
“…(DT), for evaluation and did not experiment with others. Furthermore, recent work on SemT6 has focused on cross-target stance detection (Xu et al., 2018; Wei and Mao, 2019; Zhang et al., 2020): training on one topic and evaluating on a different, unseen topic that has a known relationship with the training topic (e.g., "legalization of abortion" to "feminist movement"). These models are typically evaluated on four different test topics (each with a different training topic).…”
Section: Introduction (mentioning, confidence: 99%)
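The cross-target protocol quoted above can be made concrete with a small sketch: train only on examples about a source topic and evaluate on a different, held-out destination topic. The toy data, the bag-of-words classifier, and the macro-F1 metric are illustrative assumptions, not the models, datasets, or exact metrics used in the cited papers.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

examples = [
    {"text": "Abortion should remain legal.", "topic": "Legalization of Abortion", "stance": "favor"},
    {"text": "Abortion must be banned.", "topic": "Legalization of Abortion", "stance": "against"},
    {"text": "Equal pay for women now.", "topic": "Feminist Movement", "stance": "favor"},
    {"text": "Feminism has gone too far.", "topic": "Feminist Movement", "stance": "against"},
]

# Train on the source topic only; test on the unseen destination topic.
source, destination = "Legalization of Abortion", "Feminist Movement"
train = [ex for ex in examples if ex["topic"] == source]
test = [ex for ex in examples if ex["topic"] == destination]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit([ex["text"] for ex in train], [ex["stance"] for ex in train])
preds = model.predict([ex["text"] for ex in test])
print(f1_score([ex["stance"] for ex in test], preds, average="macro"))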
“…To employ transferable topic knowledge from source targets to destination targets, Wei and Mao [32] learned latent topics with neural variational inference [16, 24] to enhance text representations and adopted an adversarial training technique to learn more target-invariant representations. Zhang et al. [37] employed external semantic and emotion knowledge as a bridge to enable knowledge transfer across different targets and to enrich the representation learning of the text and target. These works partially extract transferable stance features from source targets to destination targets, but they ignore the most rudimentary word-level pragmatic dependency information shared across different targets.…”
Section: Related Work 2.1 Stance Detection (mentioning, confidence: 99%)
“…Cross-target stance detection aims to build stance classifiers trained on features extracted from the context of source targets that may be relevant to the destination targets, so as to alleviate the sparsity or lack of annotated data for stance detection on destination targets. Several recent studies have addressed cross-target stance detection [32, 34, 37]. These methods either leverage shared features for stance detection on destination targets by modeling the topical information of source targets [32, 34] or incorporate external knowledge between source and destination targets into model learning [37].…”
Section: Introduction (mentioning, confidence: 99%)
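As a crude illustration of the second approach family mentioned above (incorporating external knowledge into model learning), the sketch below concatenates target-independent emotion-lexicon features onto an existing text representation. This is not the SEKT model; the tiny lexicon, the emotion categories, and the concatenation scheme are hypothetical stand-ins for the much richer resources used in the cited work.

import numpy as np

EMOTION_LEXICON = {  # assumed toy lexicon; real systems use large external resources
    "ban": "anger", "love": "joy", "fear": "fear", "support": "joy",
}
EMOTIONS = ["anger", "fear", "joy"]

def emotion_features(text):
    # Bag of emotion categories triggered by the text, normalized by length.
    counts = np.zeros(len(EMOTIONS))
    tokens = text.lower().split()
    for tok in tokens:
        if tok in EMOTION_LEXICON:
            counts[EMOTIONS.index(EMOTION_LEXICON[tok])] += 1
    return counts / max(len(tokens), 1)

def knowledge_enriched_input(text_vector, text):
    # Concatenate target-independent emotion knowledge onto whatever text
    # representation the stance classifier already uses, acting as a bridge
    # that does not depend on any particular target.
    return np.concatenate([text_vector, emotion_features(text)])

print(knowledge_enriched_input(np.zeros(4), "They want to ban everything we love"))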