2019
DOI: 10.48550/arxiv.1911.02215
Preprint

Guiding Non-Autoregressive Neural Machine Translation Decoding with Reordering Information

Cited by 3 publications (4 citation statements)
References 32 publications
“…UniLM (Dong et al. 2019; Bao et al. 2020) pre-trains an encoder-based model with three tasks: unidirectional, bidirectional, and sequence-to-sequence prediction, which allows it to be fine-tuned for both natural language understanding and generation tasks. Encoder-decoder based models (Song et al. 2019; Qi et al. 2020b) are pre-trained with sequence-to-sequence tasks to help downstream generation tasks.…”
Section: Related Work
confidence: 99%
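
A minimal sketch (my own illustration, not code from the cited papers) of how one shared encoder can realize the three UniLM objectives purely by swapping self-attention masks; the function name, shapes, and mode strings are assumptions:

```python
import numpy as np

def unilm_attention_mask(src_len, tgt_len, mode):
    """Build an (L, L) self-attention mask (1 = may attend, 0 = blocked).

    Illustrative only: UniLM switches pre-training objectives by using
    masks like these inside one shared Transformer encoder.
    """
    L = src_len + tgt_len
    if mode == "bidirectional":      # every token attends to every token
        return np.ones((L, L), dtype=int)
    if mode == "unidirectional":     # causal: token i sees tokens <= i
        return np.tril(np.ones((L, L), dtype=int))
    if mode == "seq2seq":
        mask = np.zeros((L, L), dtype=int)
        mask[:, :src_len] = 1        # all tokens see the source segment
        # target tokens attend causally among themselves
        mask[src_len:, src_len:] = np.tril(np.ones((tgt_len, tgt_len), dtype=int))
        return mask
    raise ValueError(mode)

print(unilm_attention_mask(2, 3, "seq2seq"))
```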
“…This also alleviates the multimodality problem (Gu et al. 2017) in non-autoregressive generation. Many non-autoregressive translation methods have been proposed for better alignment, such as fertility (Gu et al. 2017), SoftCopy (Wei et al. 2019), or adding a reordering module (Ran et al. 2019). However, the source and target words in monolingual generation tasks cannot be aligned directly as in translation.…”
Section: Motivation
confidence: 99%
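
A hedged sketch of the SoftCopy idea referenced above (Wei et al. 2019): decoder inputs are initialized as a distance-weighted soft copy of the source embeddings, so no hard fertility-based alignment is needed. The temperature tau and the shapes here are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def soft_copy(src_emb, tgt_len, tau=1.0):
    """Soft-copy decoder input initialization (sketch of Wei et al. 2019).

    src_emb: (src_len, d) source token embeddings.
    Returns (tgt_len, d) decoder inputs, where target position j is a
    softmax(-|i - j| / tau)-weighted average of the source embeddings.
    """
    src_len, _ = src_emb.shape
    i = np.arange(src_len)[None, :]                # source positions
    j = np.arange(tgt_len)[:, None]                # target positions
    logits = -np.abs(i - j) / tau                  # nearer positions weigh more
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)  # (tgt_len, src_len)
    return weights @ src_emb

dec_inputs = soft_copy(np.random.randn(6, 8), tgt_len=7)
print(dec_inputs.shape)  # (7, 8)
```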
“…Based on variational inference, Ma et al. (2019) proposed FlowSeq to model sequence-to-sequence generation using generative flow, and Shu et al. (2020) introduced LaNMT with continuous latent variables and deterministic inference. Bao et al. (2019) and Ran et al. (2019) used position information as latent variables to explicitly model reordering in the decoding procedure.…”
Section: Non-autoregressive Translation
confidence: 99%
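
As a rough illustration (my own, under assumed shapes) of treating positions as latent variables: the model scores a target-side slot for each copied source embedding, and the decoder inputs are gathered in that predicted order before one-shot non-autoregressive decoding. The greedy hard assignment below is for clarity only; Bao et al. (2019) and Ran et al. (2019) learn such latent positions end to end:

```python
import numpy as np

def reorder_by_latent_positions(src_emb, position_logits):
    """Reorder copied source embeddings by predicted positions (sketch).

    src_emb: (n, d) embeddings copied to the decoder side.
    position_logits: (n, n) scores, where entry (i, j) rates placing
    source token i at target slot j.
    """
    n, _ = src_emb.shape
    order = np.full(n, -1, dtype=int)
    taken = np.zeros(n, dtype=bool)
    # Greedy assignment: give each target slot its best unused source token.
    for j in range(n):
        scores = position_logits[:, j].copy()
        scores[taken] = -np.inf
        i = int(np.argmax(scores))
        order[j] = i
        taken[i] = True
    return src_emb[order]  # (n, d), reordered decoder inputs

emb = np.random.randn(4, 8)
logits = np.random.randn(4, 4)
print(reorder_by_latent_positions(emb, logits).shape)  # (4, 8)
```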