Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.503

On the Benefit of Syntactic Supervision for Cross-lingual Transfer in Semantic Role Labeling

Abstract: Although recent developments in neural architectures and pre-trained representations have greatly increased state-of-the-art model performance on fully-supervised semantic role labeling (SRL), the task remains challenging for languages where supervised SRL training data are not abundant. Cross-lingual learning can improve performance in this setting by transferring knowledge from high-resource languages to low-resource ones. Moreover, we hypothesize that annotations of syntactic dependencies can be leveraged t…

Cited by 4 publications (6 citation statements)
References 48 publications
“…Additionally, we found that models trained and tested on imbalanced datasets are extremely vulnerable to mislabeled instances because incorrect signals can easily dominate training or evaluation, leading to poor performance (see Supplementary Note 2). Possible approaches to combat this problem include using methods that allow for naturally updating the labels, such as active learning [36] and semi-supervised learning [37], or building a unified model for all tissue or disease classification tasks to borrow information from related tasks.…”
Section: Discussion
confidence: 99%
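As a concrete, purely hypothetical illustration of the label-updating remedies mentioned in the statement above, the following Python sketch shows a minimal self-training loop that promotes only high-confidence model predictions to labels; the logistic-regression learner, the 0.95 confidence threshold, and the balanced class weighting are illustrative assumptions, not details taken from the cited work.

import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, threshold=0.95, rounds=3):
    # Start from the (possibly small, imbalanced) labeled set.
    X, y = np.asarray(X_labeled), np.asarray(y_labeled)
    pool = np.asarray(X_unlabeled)
    clf = LogisticRegression(max_iter=1000, class_weight="balanced")
    for _ in range(rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        probs = clf.predict_proba(pool)
        # Promote only high-confidence predictions to labels, so that noisy
        # or uncertain points are less likely to dominate the next round.
        confident = probs.max(axis=1) >= threshold
        pseudo = clf.classes_[probs[confident].argmax(axis=1)]
        X = np.concatenate([X, pool[confident]])
        y = np.concatenate([y, pseudo])
        pool = pool[~confident]
    return clf

The threshold trades coverage against the risk of the model reinforcing its own mistakes, which matters especially under the class imbalance discussed in the statement.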
“…Multi-source AL for NLP: While AL has been studied for a variety of tasks in NLP (Siddhant and Lipton, 2018; Lowell et al., 2019; Ein-Dor et al., 2020; Shelmanov et al., 2021; Margatina et al., 2021; Yuan et al., 2022; Schröder et al., 2022; Margatina et al., 2022; Kirk et al., 2022; Zhang et al., 2022), the majority of work remains limited to settings where training data is assumed to stem from a single source. Some recent works have sought to address the issues that arise when relaxing the single-source assumption (Ghorbani et al., 2021), though results remain primarily limited to image classification.…”
Section: Related Work
confidence: 99%
“…In recent years, active learning (AL) (Cohn et al., 1996) has emerged as a promising avenue for data-efficient supervised learning (Zhang et al., 2022). AL has been successfully applied to a variety of NLP tasks, such as text classification (Zhang et al., 2016; Siddhant and Lipton, 2018; Prabhu et al., 2019; Ein-Dor et al., 2020; Margatina et al., 2022), entity recognition (Shen et al., 2017; Siddhant and Lipton, 2018; Lowell et al., 2019), part-of-speech tagging (Chaudhary et al., 2021) and neural machine translation (Peris and Casacuberta, 2018; Liu et al., 2018; Zhao et al., 2020).…”
Section: Introduction
confidence: 99%
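To make the pool-based active learning setup referenced in this statement concrete, here is a minimal, hypothetical uncertainty-sampling (least-confidence) loop for text classification; the TF-IDF features, the logistic-regression learner, and the oracle labeling callback are illustrative assumptions rather than details from any of the cited papers.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def active_learning_loop(texts_lab, y_lab, pool, oracle, rounds=5, batch=10):
    # Pool-based active learning: repeatedly retrain, then query the unlabeled
    # examples the current model is least confident about.
    texts_lab, y_lab, pool = list(texts_lab), list(y_lab), list(pool)
    vec = TfidfVectorizer()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(vec.fit_transform(texts_lab), y_lab)
        if not pool:
            break
        probs = clf.predict_proba(vec.transform(pool))
        # Least-confidence acquisition: smallest maximum class probability first.
        queried = np.argsort(probs.max(axis=1))[:batch]
        for i in sorted(queried, reverse=True):
            text = pool.pop(int(i))
            texts_lab.append(text)
            y_lab.append(oracle(text))  # human (or simulated) annotator
    return clf, vec, texts_lab, y_lab

The acquisition function is the main design choice here; margin- or entropy-based variants drop in by changing only how the queried batch is scored.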
“…To summarize DAL methodologies, recent efforts have focused on specific tasks such as text classification [15] and image analysis [16], [17], specific domains such as natural language processing (NLP) [18] and computer vision (CV) [19], [20], or reproducing mainstream baselines [21], [22]. As for most early survey work, one common inadequacy is that they may not have enough discussion of recent advances [23], [24], [25], or lack summarization of emerging learning paradigms (contrastive learning and so on) and challenges [26], [27], especially in light of rapidly developing deep learning techniques (e.g., fine-tuning on pretrained models).…”
Section: Introduction
confidence: 99%