Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.511
Improving Stance Detection with Multi-Dataset Learning and Knowledge Distillation

Abstract: Stance detection determines whether the author of a text is in favor of, against, or neutral toward a specific target, and provides valuable insights into important events such as the legalization of abortion. Despite significant progress on this task, one of the remaining challenges is the scarcity of annotations. Moreover, most previous works focused on hard-label training, in which meaningful similarities among categories are discarded during training. To address these challenges, first, we evaluate a multi-target and…
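The abstract contrasts hard-label training with knowledge distillation, which trains a student on a teacher's softened class distribution so that similarities among stance categories (for instance, between "against" and "neutral") are preserved rather than discarded. The sketch below is a minimal, hypothetical illustration of such a soft-label objective in PyTorch; the function name, temperature, and mixing weight alpha are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    # Hypothetical sketch of soft-label knowledge distillation; the
    # temperature and alpha values are assumptions, not the paper's
    # reported settings.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The KL term transfers the teacher's inter-class similarity
    # information, which one-hot (hard) labels throw away.
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean")
    kd = kd * temperature ** 2  # usual correction for gradient scale
    # Standard cross-entropy on the hard stance labels.
    ce = F.cross_entropy(student_logits, hard_labels)
    return alpha * kd + (1.0 - alpha) * ce
```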

Cited by 11 publications (10 citation statements, published 2022–2024); references 49 publications (61 reference statements).
“…The low accuracy of clustering risks propagating errors, and the reliance on abundant unlabeled data makes unsupervised methods less adaptable to cross-target and zero-shot settings. Li et al. [32] proposed incorporating an LLM to generate explanations of hashtags and enhance model performance. However, directly utilizing the knowledge from large models is restricted to their training corpus and may propagate errors.…”
Section: Incorporating Background Knowledge
confidence: 99%
“…The result is shown in Table 5. Following previous work [6,32,41], we select a specific target as the test set, with the remaining task data as the training set. For example, we use →DT to denote DT as the test set, with the remaining targets (JB, SEM16-h and COV-h) as the training data.…”
Section: Cross-target Setup
confidence: 99%
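The leave-one-target-out protocol quoted above is straightforward to reproduce. Below is a minimal sketch, assuming each example carries a "target" field and that the target names (DT, JB, SEM16-h, COV-h) follow the quoted setup; both the data structure and the function name are hypothetical.

```python
# Minimal sketch of the leave-one-target-out ("cross-target") split
# described above; the example structure and target names are assumed.
def cross_target_split(examples, held_out="DT"):
    # The →DT setting: DT is the test set, all other targets train.
    train = [ex for ex in examples if ex["target"] != held_out]
    test = [ex for ex in examples if ex["target"] == held_out]
    return train, test

# Usage: train on JB, SEM16-h, and COV-h; evaluate on DT.
# train_set, test_set = cross_target_split(all_examples, held_out="DT")
```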
“…People have wide-ranging background knowledge regarding various targets and use it to infer the implicit stance in a statement. However, machines by default do not have such knowledge, and previous works on stance detection (Allaway and McKeown, 2020; Allaway et al., 2021; Liang et al., 2021; Augenstein et al., 2016; Siddiqua et al., 2019; Sun et al., 2018; Li et al., 2021b; Hardalov et al., 2021) fail to incorporate such knowledge in modeling stances.…”
Section: Introduction
confidence: 99%