Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (2021)
DOI: 10.18653/v1/2021.findings-acl.278
Enhancing Zero-shot and Few-shot Stance Detection with Commonsense Knowledge Graph

Abstract: In this paper, we consider a realistic scenario on stance detection with more application potential, i.e., zero-shot and few-shot stance detection, which identifies stances for a wide range of topics with no or very few training examples. Conventional data-driven approaches are not applicable to the above zero-shot and few-shot scenarios. For human beings, commonsense knowledge is a crucial element of understanding and reasoning. In the absence of annotated data and cryptic expression of users' stance, we beli…
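The abstract frames zero-shot stance detection as classifying a text's stance toward targets that were unseen (or barely seen) during training. For orientation only, below is a minimal baseline sketch, not the paper's commonsense-knowledge-graph model: it encodes the input text paired with the target string and classifies with a BERT sequence classifier. The model name, three-way label set, and the fine-tuning on seen targets that such a model would need are all assumptions.

# Minimal zero-shot stance detection baseline (a sketch, not the paper's model):
# encode (text, target) as a BERT sentence pair and classify favor/against/neutral.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["against", "favor", "neutral"]  # assumed 3-way label scheme

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)  # classification head is untrained here; in practice it would be
   # fine-tuned on seen targets, then applied to unseen ones

def predict_stance(text: str, target: str) -> str:
    # Sentence-pair encoding: [CLS] text [SEP] target [SEP]
    inputs = tokenizer(text, target, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(predict_stance("We need cleaner energy now.", "climate change action"))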

Cited by 47 publications (55 citation statements)
References 15 publications
“…[2] adapted a target-specific stance detection dataset [27] to ZSSD, and deployed adversarial learning to extract target-invariant transformation features in ZSSD. Further, to exploit both the structural-level and semantic-level information of the relational knowledge, [24] proposed a commonsense knowledge enhanced graph model based on BERT [9] to cope with ZSSD.…”
Section: Related Work 2.1 Zero-shot Stance Detection (mentioning)
confidence: 99%
“…Table 4: Experimental results on three ZSSD datasets. The results with ♮ are retrieved from [1], with † are retrieved from [24], with ‡ are retrieved from [2], with ♯ are retrieved from [8], with ♭ are retrieved from [23]…”
Section: Comparison Models (mentioning)
confidence: 99%
“…Zhang et al [45] proposed a semantic-emotion knowledge transferring model for cross-target stance detection, which used external knowledge as a bridge to enable knowledge transfer across different targets. Liu et al [19] introduced a commonsense knowledge enhanced model to exploit both the structural-level and semantic-level information of the relational knowledge. Besides, Zhang et al [46] leveraged multiple external knowledge bases as bridges to explicitly link potentially opinioned terms in texts to targets of interest.…”
Section: Stance Detection (mentioning)
confidence: 99%