Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.603
Neural Unsupervised Domain Adaptation in NLP—A Survey

Abstract: Deep neural networks excel at learning from labeled data and achieve state-of-the-art results on a wide array of Natural Language Processing tasks. In contrast, learning from unlabeled data, especially under domain shift, remains a challenge. Motivated by the latest advances, in this survey we review neural unsupervised domain adaptation techniques which do not require labeled target domain data. This is a more challenging yet more widely applicable setup. We outline methods, from early traditional non-neura…

Cited by 137 publications (113 citation statements) · References 99 publications
“…Domain adversarial training is a dominant approach for UDA (Ramponi and Plank, 2020),[1] inspired by the theory of learning from different domains introduced in Ben-David et al. (2007, 2010). Ganin and Lempitsky (2015) and Ganin et al. (2016) propose to learn the task while being unable to distinguish whether samples come from the source or the target distribution, through the use of an adversarial cost.…”
[1] https://github.com/ckarouzos/slp_daptmlm
Section: Related Work
confidence: 99%
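To make the adversarial objective concrete, the following is a minimal sketch of the gradient reversal idea from Ganin and Lempitsky (2015), assuming PyTorch; the layer sizes and the name GradReverse are illustrative, not taken from the cited papers.

import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    # Identity on the forward pass; negated, scaled gradient on the backward
    # pass, so the encoder is pushed to fool the domain classifier.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())  # shared feature extractor
task_head = nn.Linear(128, 2)     # trained with labels on the source domain only
domain_head = nn.Linear(128, 2)   # predicts source vs. target for every sample

def forward(x, lambd=1.0):
    h = encoder(x)
    # The task loss and the (reversed) domain loss are summed during training.
    return task_head(h), domain_head(GradReverse.apply(h, lambd))

Training the domain classifier through the reversed gradient drives the encoder toward features that are indistinguishable across domains, which is exactly the adversarial cost the quoted passage describes.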
“…It is more practical yet more challenging. For a complete treatment of neural networks and UDA in NLP, refer to Ramponi and Plank (2020). Also, we do not treat multilingual work.…”
Section: Applications of Divergence Measures
confidence: 99%
“…Unsupervised domain adaptation. In the past few years, there has been considerable interest in unsupervised domain adaptation for cross-domain NLP tasks, including cross-domain sentiment analysis (Ramponi and Plank, 2020). Previous work has focused on minimizing the discrepancy between domains by aligning the output distributions of the source and the target domains.…”
Section: Related Work
confidence: 99%
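As a hypothetical illustration of that alignment idea (not code from the cited work), one simple variant penalizes the divergence between the model's mean output distributions on a source batch and a target batch, assuming PyTorch; the symmetric-KL choice is an assumption.

import torch
import torch.nn.functional as F

def output_alignment_loss(model, src_x, tgt_x, eps=1e-8):
    # Mean predicted class distribution over each domain's batch.
    p_src = F.softmax(model(src_x), dim=-1).mean(dim=0)
    p_tgt = F.softmax(model(tgt_x), dim=-1).mean(dim=0)
    # Symmetric KL between the two mean distributions; typically added to the
    # supervised source-domain loss with a small weight.
    kl = lambda p, q: (p * ((p + eps) / (q + eps)).log()).sum()
    return 0.5 * (kl(p_src, p_tgt) + kl(p_tgt, p_src))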
“…Maximum Mean Discrepancy (MMD) (Gretton et al., 2012), KL-divergence (Zhuang et al., 2015), Correlation Alignment (CORAL) (Sun and Saenko, 2016), and domain-adversarial learning (Ganin et al., 2016) are among the most widely used methods for learning domain-invariant features. In the same vein, other researchers have adopted a self-training approach in order to learn discriminative features of the target domain (Ramponi and Plank, 2020). The latter approach enables the model to also be trained on samples of the target domain.…”
Section: Related Work
confidence: 99%
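For concreteness, here is a minimal sketch of the biased estimate of squared MMD with a Gaussian kernel, in the spirit of Gretton et al. (2012), assuming PyTorch; the bandwidth sigma is a free parameter, and the single-kernel choice is an assumption (multi-kernel variants are common in practice).

import torch

def gaussian_kernel(a, b, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)) for all pairs of rows.
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(src_feats, tgt_feats, sigma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy between two batches
    # of features; minimizing it encourages domain-invariant representations.
    k_ss = gaussian_kernel(src_feats, src_feats, sigma).mean()
    k_tt = gaussian_kernel(tgt_feats, tgt_feats, sigma).mean()
    k_st = gaussian_kernel(src_feats, tgt_feats, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st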