Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) 2018
DOI: 10.18653/v1/p18-2123
Cross-Target Stance Classification with Self-Attention Networks

Abstract: In stance classification, the target on which the stance is made defines the boundary of the task, and a classifier is usually trained for prediction on the same target. In this work, we explore the potential for generalizing classifiers between different targets, and propose a neural model that can apply what has been learned from a source target to a destination target. We show that our model can find useful information shared between relevant targets which improves generalization in certain scenarios.
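The abstract describes a self-attention network that transfers stance knowledge from a source target to a destination target. As a rough, hypothetical illustration of the self-attention building block only (not the authors' actual architecture; all names, dimensions, and the random inputs are invented for this sketch), a single-head scaled dot-product attention over token embeddings, followed by mean pooling and a linear stance layer, could look like:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # X: (seq_len, d) token embeddings; single-head scaled dot-product
    # self-attention where queries, keys, and values are all X.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)     # (seq_len, seq_len) similarity scores
    weights = softmax(scores, axis=-1)  # each row is a distribution over tokens
    return weights @ X                # context-aware token representations

def stance_logits(X, W, b):
    # Mean-pool the attended tokens, then a linear layer over stance labels.
    pooled = self_attention(X).mean(axis=0)  # (d,)
    return pooled @ W + b                    # (n_labels,)

rng = np.random.default_rng(0)
seq_len, d, n_labels = 6, 8, 3      # e.g. labels: favor / against / none
X = rng.normal(size=(seq_len, d))   # stand-in for sentence token embeddings
W = rng.normal(size=(d, n_labels))  # hypothetical classifier weights
b = np.zeros(n_labels)

probs = softmax(stance_logits(X, W, b))
# probs has shape (n_labels,) and sums to 1
```

In a cross-target setting, the idea sketched here would be trained on one target's labeled data and applied to another; the shared attention and pooling layers are what would carry information between related targets.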

Cited by 80 publications (63 citation statements)
References 8 publications (9 reference statements)
“…As our approach is not limited to specific tasks, it is interesting to validate our model in other tasks, such as reading comprehension, language inference, and stance classification (Xu et al., 2018). Another promising direction is to design more powerful localness modeling techniques, such as incorporating linguistic knowledge (e.g.…”
Section: Results
confidence: 99%
“…Many previous models for stance detection trained an individual classifier for each topic (Lin et al., 2006; Beigman Klebanov et al., 2010; Sridhar et al., 2015; Somasundaran and Wiebe, 2010; Hasan and Ng, 2013; Li et al., 2018; Hasan and Ng, 2014) or for a small number of topics common to both the training and evaluation sets (Faulkner, 2014; Du et al., 2017). In addition, a handful of models for the TwitterStance dataset have been designed for cross-target stance detection (Augenstein et al., 2016; Xu et al., 2018), including a number of weakly supervised methods using unlabeled data related to the test topic (Zarrella and Marsh, 2016; Wei et al., 2016; Dias and Becker, 2016). In contrast, our models are trained jointly for all topics and are evaluated for zero-shot stance detection on a large number of new test topics (i.e., none of the zero-shot test topics occur in the training data).…”
Section: Related Work
confidence: 99%
“…Based on the problem setting and the corpus, dozens of works, such as stance detection based on standard supervised learning [31]-[37], [40]-[49] and weakly supervised stance detection [32], [33], [38], [39], [49], [50], have been devised recently. Table 4 describes the core techniques and the statistical results of related models.…”
Section: Problem Settings
confidence: 99%
“…For example, the tweet "Jeb Bush is the only sane candidate in this republican lineup, I support him" will be assigned a positive label by sentiment analysis [140], [143], but an 'against' stance toward the topic "Donald Trump as President" by stance detection. Research on stance detection can be categorized into four groups based on debate settings: congressional floor debates [6]-[9], company-internal discussions [10], [11], ideological debates on online forums [12]-[27], and hot-event-oriented debates on social media [28]-[50]. The latter two are open-domain and flexible, and therefore more challenging.…”
Section: Introduction
confidence: 99%