Generalizing Back-Translation in Neural Machine Translation
Preprint, 2019
DOI: 10.48550/arxiv.1906.07286

Cited by 2 publications (3 citation statements: 0 supporting, 3 mentioning, 0 contrasting; citing publications from 2021–2022). References: 0 publications.
“…Various studies have investigated back-translation with the aim of improving the backward model, selecting the most suitable generation/decoding methods, and reducing the impact of the ratio of synthetic to real bitext. Several have found that the quality of models trained using back-translation depends on the quality of the backward model [5], [9], [11], [15], [25], [27], [28]. To improve the quality of the synthetic parallel data, [9] used iterative back-translation, iteratively using the back-translated data to improve the backward model; Kocmi and Bojar [28] and Dabre et al. [29] used high-resource languages through transfer learning; and Zhang et al. [30] explored the use of both target and source monolingual data to improve both the backward and forward models.…”
Section: B. Leveraging Monolingual Data for NMT
confidence: 99%
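The iterative back-translation scheme this statement attributes to [9] (with Zhang et al. [30] covering the bidirectional variant) is compact enough to sketch. The following is a minimal illustration, assuming hypothetical helpers train_nmt(pairs) and translate(model, sentence) that stand in for a real NMT toolkit; it is not the implementation from any cited paper.

```python
# Minimal sketch of iterative back-translation. train_nmt(pairs) and
# translate(model, sentence) are hypothetical stand-ins for an NMT toolkit.

def iterative_back_translation(bitext, mono_src, mono_tgt, rounds=2):
    """bitext: list of (src, tgt) pairs; mono_*: monolingual sentences."""
    # Round 0: both directions are trained on the real bitext only.
    forward = train_nmt(bitext)                        # src -> tgt
    backward = train_nmt([(t, s) for s, t in bitext])  # tgt -> src
    for _ in range(rounds):
        # Back-translate target monolingual data to grow the forward
        # model's training set with synthetic (source, target) pairs.
        synth_fwd = [(translate(backward, t), t) for t in mono_tgt]
        forward = train_nmt(bitext + synth_fwd)
        # Symmetrically, forward-translate source monolingual data to
        # improve the backward model, so the next round's back-translations
        # come from a stronger model.
        synth_bwd = [(translate(forward, s), s) for s in mono_src]
        backward = train_nmt([(t, s) for s, t in bitext] + synth_bwd)
    return forward, backward
```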
“…Recently, researchers have proposed methods that exploit the more readily available monolingual data of one or both languages to augment the parallel data and improve the performance of translation models. Such methods include integrating a language model [8], back-translation [9]-[11], forward translation [12], and dual learning [13].…”
Section: Introduction
confidence: 99%
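Two of the methods this statement lists, back-translation [9]-[11] and forward translation [12], are mirror images as data augmentation. The sketch below contrasts them, reusing the same hypothetical train_nmt/translate helpers as the previous block: back-translation pairs synthetic sources with real targets, while forward translation (self-training) pairs real sources with synthetic targets.

```python
# Back-translation vs. forward translation as one-shot data augmentation.
# train_nmt(pairs) and translate(model, sentence) are hypothetical helpers.

def back_translate(bitext, mono_tgt):
    # Synthetic source, real target: translation errors land on the input
    # side, so the forward model still learns from clean target text.
    backward = train_nmt([(t, s) for s, t in bitext])
    return [(translate(backward, t), t) for t in mono_tgt]

def forward_translate(bitext, mono_src):
    # Real source, synthetic target: the model is trained on its own,
    # noisier outputs (self-training).
    forward = train_nmt(bitext)
    return [(s, translate(forward, s)) for s in mono_src]

def augment_and_train(bitext, mono_src, mono_tgt):
    # Mix the real bitext with both kinds of synthetic pairs and retrain.
    return train_nmt(bitext
                     + back_translate(bitext, mono_tgt)
                     + forward_translate(bitext, mono_src))
```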
“…To generate a better synthetic sentence for the BT method, various generation methods have been proposed, such as greedy search, beam search, stochastic sampling, and filtered sampling (Cheng, 2019; He et al., 2016; Imamura et al., 2018; Edunov et al., 2018; Fadaee and Monz, 2018; Graça et al., 2019; Caswell et al., 2019; Wu et al., 2019). Among them, stochastic sampling and filtered sampling have demonstrated promising performance.…”
Section: Introduction
confidence: 99%
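The generation methods this statement compares differ mainly in how the next token is drawn from the model's distribution at each decoding step (beam search, a sequence-level method, is omitted here for brevity). The toy sketch below operates on a NumPy probability vector rather than a real NMT decoder, and the function names are illustrative, not from any cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def greedy(probs):
    # Deterministic: always pick the most likely token.
    return int(np.argmax(probs))

def sample(probs):
    # Stochastic sampling: draw from the full distribution, which adds
    # diversity (and noise) to the synthetic source side.
    return int(rng.choice(len(probs), p=probs))

def filtered_sample(probs, k=10):
    # Filtered (top-k) sampling: renormalize over the k most likely
    # tokens, trading some diversity for fewer degenerate choices.
    top = np.argsort(probs)[-k:]
    p = probs[top] / probs[top].sum()
    return int(rng.choice(top, p=p))

# Toy next-token distribution over a 5-token vocabulary.
probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(greedy(probs), sample(probs), filtered_sample(probs, k=3))
```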