Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019
DOI: 10.18653/v1/p19-1020
Effective Adversarial Regularization for Neural Machine Translation

Abstract: A regularization technique based on adversarial perturbation, which was initially developed in the field of image processing, has been successfully applied to text classification tasks and has yielded attractive improvements. We aim to further leverage this promising methodology into more sophisticated and critical neural models in the natural language processing field, i.e., neural machine translation (NMT) models. However, it is not trivial to apply this methodology to such models. Thus, this paper investiga…
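As a rough illustration of the technique described in the abstract, the sketch below applies adversarial regularization to source embeddings in the spirit of Goodfellow et al. (2014) and Miyato et al. (2016): the gradient of the translation loss with respect to the embeddings defines a worst-case perturbation, and the model is additionally trained against the perturbed input. The names model, src_embeds, tgt, and epsilon are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn.functional as F


def adversarial_regularization_loss(model, src_embeds, tgt, epsilon=1.0):
    """Sketch of adversarial regularization in embedding space.

    Assumptions (not from the paper): model(src_embeds) maps source
    embeddings of shape (batch, len, dim) to target-vocabulary logits,
    src_embeds requires grad (e.g. the output of an embedding layer),
    and tgt holds reference token ids of shape (batch, len).
    """
    logits = model(src_embeds)
    clean_loss = F.cross_entropy(logits.view(-1, logits.size(-1)), tgt.view(-1))

    # The gradient w.r.t. the embeddings points in the direction that most
    # increases the loss; keep the graph so clean_loss can still be backpropagated.
    grad, = torch.autograd.grad(clean_loss, src_embeds, retain_graph=True)

    # Normalize per position and scale by epsilon to obtain a continuous
    # adversarial perturbation (as in Miyato et al., 2016).
    r_adv = epsilon * grad / grad.norm(p=2, dim=-1, keepdim=True).clamp_min(1e-12)

    # Second forward pass on the perturbed embeddings; the extra loss term
    # regularizes the model to be smooth around each training example.
    adv_logits = model(src_embeds + r_adv)
    adv_loss = F.cross_entropy(adv_logits.view(-1, adv_logits.size(-1)), tgt.view(-1))

    return clean_loss + adv_loss

The equal weighting of the clean and adversarial loss terms and the L2-normalized perturbation above are only one plausible instantiation; the actual choices of norm, scaling, and loss weighting are design decisions the paper itself investigates.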

Cited by 33 publications (54 citation statements). References 13 publications.
“…Some authors further applied adversarial training to various NLP tasks, such as relation extraction (Wu et al., 2017), part-of-speech tagging (Yasunaga et al., 2018), and jointly extracting entities and relations (Bekoulis et al., 2018). A recent work (Sato et al., 2019) investigates the effects of AT on neural machine translation. Another work studied the effects of applying AT to different sets of variables in MRC tasks.…”
Section: Related Work
confidence: 99%
“…Due to its formidable search space, this paradigm simply perturbs a small ratio of token positions and greedily searches by brute force among candidates. Note that adversarial example generation is fundamentally different from noising hidden representations in adversarial training (Cheng et al., 2019; Sano et al., 2019), which is not the concern of this work.…”
Section: Adversarial Examples in NLP
confidence: 99%
“…2019; Motoki Sato, 2019). However, we train the model with only the adversarial samples for the sake of fair comparison with the baselines.…”
Section: Problem Definition
confidence: 99%
“…Our method extends the adversarial training framework, which was initially developed in the vision domain (Goodfellow et al., 2014) and has recently begun to be adopted in the NLP domain (Jia and Liang, 2017; Belinkov and Bisk, 2018; Samanta and Mehta, 2017; Miyato et al., 2016; Motoki Sato, 2019; Wang et al., 2019a; Cheng et al., 2019). Miyato et al. (2016) adopted the adversarial training framework on text classification by perturbing the embedding space with continuous adversarial noise.…”
Section: Related Work
confidence: 99%