Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.409
Cross-Lingual Event Detection via Optimized Adversarial Training

Abstract: In this work, we focus on Cross-Lingual Event Detection where a model is trained on data from a source language but its performance is evaluated on data from a second, target, language. Most recent works in this area have harnessed the language-invariant qualities displayed by pre-trained Multi-lingual Language Models. Their performance, however, reveals there is room for improvement as the cross-lingual setting entails particular challenges. We employ Adversarial Language Adaptation to train a Language Discrim…
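Adversarial language adaptation of the kind the abstract describes is commonly implemented with a gradient reversal layer (GRL): the language discriminator's gradient is negated before it reaches the shared encoder, so the encoder is pushed toward language-invariant representations. A minimal numeric sketch of that semantics, assuming a scalar `LAMBDA` reversal strength (the names and value are illustrative, not from the paper):

```python
LAMBDA = 0.5  # hypothetical reversal strength (hyperparameter)

def grl_forward(x):
    # Gradient Reversal Layer: identity in the forward pass.
    return x

def grl_backward(grad_from_discriminator):
    # Backward pass: flip the sign (scaled by LAMBDA) so the shared
    # encoder is updated to *fool* the language discriminator rather
    # than help it.
    return -LAMBDA * grad_from_discriminator

# Toy check: a gradient of +2.0 flowing back from the discriminator
# reaches the encoder as -1.0.
g = grl_backward(2.0)  # → -1.0
```

In practice this sign flip is wired into the autograd graph (e.g. a custom backward function), but the forward-identity / backward-negation contract is exactly the one sketched here.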

Cited by 7 publications (9 citation statements)
References 20 publications (33 reference statements)
“…Base Model: Following prior work (Guzman-Nateras et al., 2022), we formulate ED as a sequence labeling problem to facilitate cross-lingual transfer learning (CLTL). Given an input sentence of n tokens W = {w_1, w_2, …”
Section: Model
confidence: 99%
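The sequence-labeling formulation of ED quoted above is conventionally realized by converting event-trigger spans into per-token BIO tags. A small sketch of that conversion (the example sentence, span encoding, and event type are illustrative, not drawn from the paper):

```python
def spans_to_bio(tokens, trigger_spans):
    """Convert (start, end, event_type) trigger spans into one BIO tag
    per token -- the sequence-labeling view of event detection."""
    tags = ["O"] * len(tokens)
    for start, end, etype in trigger_spans:
        tags[start] = f"B-{etype}"          # first token of the trigger
        for i in range(start + 1, end):     # any continuation tokens
            tags[i] = f"I-{etype}"
    return tags

tokens = ["Protesters", "attacked", "the", "embassy"]
tags = spans_to_bio(tokens, [(1, 2, "Conflict:Attack")])
# tags == ["O", "B-Conflict:Attack", "O", "O"]
```

A tagger over these labels transfers across languages because the label space is shared even when the input language changes.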
“…We utilize the XLM-R base model with 768 dimensions in the hidden vectors to be comparable with previous work (Guzman-Nateras et al., 2022). We tune the hyperparameters for our model over the development data using the EN→ZH language pair.…”
Section: Model
confidence: 99%
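The quoted setup fixes only two concrete details: the XLM-R base encoder with 768-dimensional hidden vectors, and hyperparameter tuning on the EN→ZH development pair. A hypothetical configuration fragment capturing them (every field other than those two is an assumption for illustration):

```python
# Hypothetical experiment config; only hidden_size (768, XLM-R base)
# and the EN→ZH tuning pair come from the quoted passage.
CONFIG = {
    "encoder": "xlm-roberta-base",  # pretrained multilingual encoder
    "hidden_size": 768,             # XLM-R base hidden dimension
    "tune_pair": ("en", "zh"),      # dev-set language pair for tuning
}
```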
“…Unlike such prior work, we introduce a new word-label alignment perspective using OT for ED. Finally, some recent work has utilized OT for character/word/example alignment problems (Dou and Neubig, 2021; Xu et al., 2021; Veyseh et al., 2021a, 2022; Guzman-Nateras et al., 2022). However, none of them explores OT for word-label alignment in ED.…”
Section: Related Work
confidence: 99%
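Entropic-regularized optimal transport of the kind cited above is usually solved with Sinkhorn iterations, which alternately rescale the rows and columns of a kernel matrix until both marginals match. A self-contained sketch in plain Python (the toy cost matrix, marginals, and regularization value are illustrative):

```python
import math

def sinkhorn(cost, a, b, reg=0.1, n_iters=200):
    """Entropic-regularized OT plan between marginals a and b
    for a given cost matrix, via Sinkhorn row/column scaling."""
    n, m = len(cost), len(cost[0])
    # Gibbs kernel: low cost -> large kernel entry.
    K = [[math.exp(-c / reg) for c in row] for row in cost]
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(n_iters):
        # Rescale rows to match marginal a, then columns to match b.
        for i in range(n):
            u[i] = a[i] / sum(K[i][j] * v[j] for j in range(m))
        for j in range(m):
            v[j] = b[j] / sum(K[i][j] * u[i] for i in range(n))
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Toy alignment: the plan concentrates mass on the cheap (diagonal) pairs.
plan = sinkhorn([[0.0, 1.0], [1.0, 0.0]], [0.5, 0.5], [0.5, 0.5])
```

In the word-label alignment setting, rows would correspond to words and columns to labels, with the cost derived from representation distances; libraries such as POT provide tuned implementations of the same iteration.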