Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers 2016
DOI: 10.18653/v1/w16-2361
CUNI System for WMT16 Automatic Post-Editing and Multimodal Translation Tasks

Abstract: Neural sequence-to-sequence learning has recently become a promising paradigm in machine translation, achieving results competitive with statistical phrase-based systems. In this system description paper, we apply several recently published methods for neural sequential learning to build systems for the WMT 2016 shared tasks on Automatic Post-Editing and Multimodal Machine Translation.


Cited by 50 publications (65 citation statements); references 22 publications.
“…Such works, however, exploit the idea of a "joint representation" of the input mainly in the statistical phrase-based APE framework, while within the neural paradigm recent prior work mostly focuses on single-source systems (Pal et al., 2016a; Junczys-Dowmunt and Grundkiewicz, 2016; Pal et al., 2017). The only exception, to the best of our knowledge, is the approach of Libovický et al. (2016), who developed a multi-source neural APE system. According to the authors, however, the resulting network seems inadequate for learning to perform the minimum edits required to correct the MT segment.…”
Section: Neural Machine Translation (mentioning)
Confidence: 99%
“…Our multi-source APE implementation, which is built on top of the network architecture discussed in §2, is similar to that of Libovický et al. (2016), but extends it with context dropout and treats the target as a sequence of words rather than a minimum-length sequence. We extend the architecture to have two encoders, one for src and another for mt.…”
Section: Neural Machine Translation (mentioning)
Confidence: 99%
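The two-encoder design quoted above can be sketched in a few lines. The NumPy snippet below is a toy illustration, not the authors' implementation: the mean-pooled tanh "encoder" stands in for a recurrent encoder, and all parameter names (`W_src`, `W_mt`, `W_init`) are hypothetical. It shows only the wiring idea: one encoder per input stream (src and mt), with both summaries concatenated to initialise the decoder state.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(seq_emb, W):
    """Toy encoder: a mean-pooled tanh projection stands in for the
    recurrent encoder states (illustration only)."""
    return np.tanh(seq_emb @ W).mean(axis=0)

d = 8
# hypothetical parameters, one set per encoder plus a decoder-init projection
W_src = rng.standard_normal((d, d))
W_mt = rng.standard_normal((d, d))
W_init = rng.standard_normal((2 * d, d))

src = rng.standard_normal((5, d))  # embedded source sentence (5 tokens)
mt = rng.standard_normal((6, d))   # embedded MT output (6 tokens)

# two separate encoders, one for src and one for mt
h_src = encode(src, W_src)
h_mt = encode(mt, W_mt)

# concatenate both summaries to initialise the decoder state
dec_init = np.tanh(np.concatenate([h_src, h_mt]) @ W_init)
assert dec_init.shape == (d,)
```

A real system would use recurrent (or attentional) encoders and attend over both streams during decoding; the concatenation step above is just the simplest way to show two input streams feeding one decoder.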
“…Finally, inspired by Libovický et al. (2016), we also trained a separate model that generates a sequence of post-editing operations ("editops") instead of directly generating the target sequence of characters. The model learns either to emit the special tokens "<keep>" and "<delete>" or to produce characters present in the training data, thereby indicating the modifications needed for the MT output.…”
Section: Predicting Edit Operations (mentioning)
Confidence: 99%
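The editops scheme quoted above can be illustrated with a small sketch. The token names "<keep>" and "<delete>" come from the quote; extracting the operation sequence with a character-level diff (`difflib`) is my own illustrative choice, not necessarily how the cited system builds its training data.

```python
import difflib

KEEP, DELETE = "<keep>", "<delete>"

def to_editops(mt: str, pe: str) -> list[str]:
    """Encode the post-edited string `pe` as edit operations over the
    MT output `mt`: <keep> copies the next MT character, <delete>
    drops it, and a literal character inserts it."""
    ops = []
    sm = difflib.SequenceMatcher(a=mt, b=pe, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.extend([KEEP] * (i2 - i1))
        elif tag == "delete":
            ops.extend([DELETE] * (i2 - i1))
        elif tag == "insert":
            ops.extend(pe[j1:j2])
        else:  # "replace": drop the old characters, then insert the new ones
            ops.extend([DELETE] * (i2 - i1))
            ops.extend(pe[j1:j2])
    return ops

def apply_editops(mt: str, ops: list[str]) -> str:
    """Reconstruct the post-edited string from the MT output and ops."""
    out, i = [], 0
    for op in ops:
        if op == KEEP:
            out.append(mt[i])
            i += 1
        elif op == DELETE:
            i += 1
        else:  # literal character insertion
            out.append(op)
    return "".join(out)
```

By construction, applying the derived operations to the MT output reproduces the post-edited string exactly, which is what makes such sequences usable as training targets.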
“…Multi-modal MT has only recently been addressed by the MT community in a shared task, where many different groups proposed techniques for multi-modal translation using different combinations of NMT and SMT models (Caglayan et al., 2016; Calixto et al., 2016; Huang et al., 2016; Libovický et al., 2016; Shah et al., 2016). In the multimodal translation task, participants are asked to train models that translate image descriptions from one natural language into another while also taking the image itself into consideration.…”
Section: Related Work (mentioning)
Confidence: 99%