2022
DOI: 10.1609/aaai.v36i10.21318

Few-Shot Cross-Lingual Stance Detection with Sentiment-Based Pre-training

Abstract: The goal of stance detection is to determine the viewpoint expressed in a piece of text towards a target. These viewpoints or contexts are often expressed in many different languages depending on the user and the platform, which can be a local news outlet, a social media platform, a news forum, etc. Most research on stance detection, however, has been limited to working with a single language and on a few limited targets, with little work on cross-lingual stance detection. Moreover, non-English sources of labe…

Cited by 24 publications (21 citation statements)
References 55 publications (65 reference statements)
“…Further on, we would like to investigate the use of zero-shot LWAN methods (Rios and Kavuluru, 2018; Chalkidis et al., 2020a), which currently harm averaged performance in favor of improved worst-case performance. Label encodings based on contextualized word representations generated by pre-trained language models (Hardalov et al., 2021) may mitigate the effect of using non-contextualized ones (e.g., Word2Vec).…”
Section: Discussion
confidence: 99%
“…For Arabic, Khouja (2020) achieved 76.7 F1 for stance detection on the ANS dataset using mBERT. Similarly, Hardalov et al. (2022) applied pattern-exploiting training (PET) with sentiment pre-training in a cross-lingual setting, showing sizeable improvements on 15 datasets. Alhindi et al. (2021) showed that language-specific pre-training was pivotal, outperforming the state of the art on AraStance (52 F1) and Arabic FC (78 F1).…”
Section: Approaches
confidence: 99%
“…They further integrated label embeddings (Augenstein et al., 2018), and eventually developed an end-to-end unsupervised framework for predicting stance from a set of unseen target labels. Hardalov et al. (2022) explored PET (Schick and Schütze, 2021) in a cross-lingual setting, combining datasets with different label inventories by modelling the task as a cloze question answering one. They showed that MDL helps for low-resource and substantially for full-resource scenarios.…”
Section: Approaches
confidence: 99%
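The citation above describes recasting stance detection as a cloze question so that datasets with different label inventories can be combined. A minimal sketch of that idea follows; the pattern text, verbalizer mapping, and function names are illustrative assumptions, not the actual implementation from Hardalov et al. (2022).

```python
# Hypothetical sketch of a PET-style cloze formulation for stance detection.
# The pattern and verbalizer below are illustrative, not the authors' own.

# Different datasets use different label inventories; a shared verbalizer
# maps each dataset-specific label onto a common answer token.
VERBALIZER = {
    "favor": "positive", "pro": "positive", "support": "positive",
    "against": "negative", "con": "negative", "deny": "negative",
    "neutral": "discussing", "comment": "discussing",
}

def to_cloze(text: str, target: str, mask_token: str = "[MASK]") -> str:
    """Rewrite a (text, target) pair as a cloze question for a masked LM.

    The masked LM's score for each verbalizer token at the mask position
    would then serve as the stance prediction.
    """
    return f'"{text}" The stance towards {target} is {mask_token}.'

def harmonize_label(dataset_label: str) -> str:
    """Map a dataset-specific stance label onto the shared verbalizer vocabulary."""
    return VERBALIZER[dataset_label.lower()]
```

Under this framing, examples from datasets labelled {favor, against, neutral} and {pro, con, comment} can be trained jointly, since both collapse onto the same three answer tokens.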