Findings of the Association for Computational Linguistics: EMNLP 2021
DOI: 10.18653/v1/2021.findings-emnlp.373

Switch Point biased Self-Training: Re-purposing Pretrained Models for Code-Switching

Abstract: Code-switching (CS), a ubiquitous phenomenon due to the ease of communication it offers in multilingual communities, still remains an understudied problem in language processing. The primary reasons behind this are: (1) minimal efforts in leveraging large pretrained multilingual models, and (2) the lack of annotated data. The distinguishing case of low performance of multilingual models in CS is the intra-sentence mixing of languages leading to switch points. We first benchmark two sequence labeling tasks - POS …

Cited by 2 publications (1 citation statement)
References 27 publications

“…Techniques for adapting to linguistic variants and mixed-language data include adversarial learning to pick up on key linguistic cues (Kumar et al, 2021), augmenting datasets with synthetic text (Winata et al, 2019) or examples of variants that models underperform on (Chopra et al, 2021), discriminative learning (Gonen and Goldberg, 2019), and transfer learning with morphological cues (Aguilar and Solorio, 2020). [Footnote 2: Our prompts are data-dependent and fixed, and thus rather unrelated to the prompt tuning literature.]…”
Section: Related Work
confidence: 99%