Proceedings of the 15th Conference of the European Chapter of The Association for Computational Linguistics: Volume 1 2017
DOI: 10.18653/v1/e17-1099
Learning to Translate in Real-time with Neural Machine Translation

Abstract: Translating in real-time, a.k.a. simultaneous translation, outputs translation words before the input sentence ends, which is a challenging problem for conventional machine translation methods. We propose a neural machine translation (NMT) framework for simultaneous translation in which an agent learns to make decisions on when to translate from its interaction with a pre-trained NMT environment. To trade off quality and delay, we extensively explore various targets for delay and design a method for beam-searc…
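The abstract's agent/environment setup can be pictured as a READ/WRITE decision loop: at each step the agent either consumes one more source word or lets the NMT model emit a target word. The sketch below is illustrative only; `policy` and `nmt` are hypothetical stand-ins for the trained agent and the pre-trained NMT environment, not the paper's actual models.

```python
def simultaneous_translate(source_words, policy, nmt):
    """Greedy READ/WRITE loop for simultaneous translation.

    policy(state) -> "READ" or "WRITE", where state = (num_read, num_written).
    nmt(read, output) -> next target word given the partial source and
    partial translation, or "</s>" to stop. Both are placeholders.
    """
    read, output = [], []
    i = 0
    while True:
        state = (len(read), len(output))
        if i < len(source_words) and policy(state) == "READ":
            read.append(source_words[i])  # consume one more source word
            i += 1
        else:
            word = nmt(read, output)      # emit the next target word
            if word == "</s>":
                break
            output.append(word)
    return output
```

With a wait-1-style policy (always keep a one-word lead over the output) and a toy copy model, the loop alternates READ and WRITE until the source is exhausted.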



Cited by 158 publications (243 citation statements). References 14 publications.
“…With this refined g, we can make several latency metrics content-aware, including average proportion (Cho and Esipova, 2016), consecutive wait (Gu et al., 2017), average lagging, and differentiable average lagging (Arivazhagan et al., 2019b). We opt for differentiable average lagging (DAL) because of its interpretability and because it sidesteps some problems with average lagging (Cherry and Foster, 2019).…”
Section: Latency
confidence: 99%
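The latency metrics this excerpt names are all functions of the sequence g, where g(t) is the number of source words read when target word t is emitted. A minimal sketch of two of them, assuming the standard definitions (average proportion from Cho and Esipova, 2016; average lagging from Ma et al., 2019) rather than the refined content-aware variants the citing paper describes:

```python
def average_proportion(g, src_len, tgt_len):
    # AP: mean fraction of the source that has been read
    # when each target word is emitted; 1-indexed g as a list.
    return sum(g) / (src_len * tgt_len)

def average_lagging(g, src_len, tgt_len):
    # AL: average number of source words the system lags behind an
    # ideal translator that keeps pace with the input; gamma rescales
    # for the source/target length ratio.
    gamma = tgt_len / src_len
    # tau: index (1-based) of the first target word emitted
    # after the entire source has been read
    tau = next(t for t, gt in enumerate(g, 1) if gt >= src_len)
    return sum(g[t - 1] - (t - 1) / gamma for t in range(1, tau + 1)) / tau
```

For example, a wait-1 schedule on a length-3 pair has g = [1, 2, 3], giving AP = 2/3 and AL = 1.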
“…Furthermore, retranslation's latency-quality trade-off can be manipulated without retraining the base system. It is not the only solution to have these properties; most policies that are not trained jointly with NMT can make the same claims (Cho and Esipova, 2016; Gu et al., 2017). We conduct an experiment to demonstrate the value of this flexibility, by comparing our Base system to the upgraded Bidi+Beam.…”
Section: Extendability of Re-translation
confidence: 99%
“…A number of methods have been proposed to solve this problem, and the introduction of neural-network-based MT systems brought a great improvement in simultaneous translation and in the accuracy of standard MT (Gu et al., 2017). NMT was proposed by Kalchbrenner and Blunsom (2013), Sutskever et al. (2014), and Cho et al. (2014a).…”
Section: Neural Machine Translation System
confidence: 99%