Proceedings of the Second Conference on Machine Translation 2017
DOI: 10.18653/v1/w17-4757

Results of the WMT17 Neural MT Training Task

Abstract: This paper presents the results of the WMT17 Neural MT Training Task. The objective of this task is to explore the methods of training a fixed neural architecture, aiming primarily at the best translation quality and, as a secondary goal, shorter training time. Task participants were provided with a complete neural machine translation system, fixed training data and the configuration of the network. The translation was performed in the English-to-Czech direction and the task was divided into two subtasks of di…

Cited by 20 publications (23 citation statements)
References: 25 publications
“…We use the same model parameters as defined for the WMT 2017 NMT Training Task (Bojar et al., 2017). The task defines models of two sizes, one that fits a 4GB GPU and one that fits an 8GB GPU.…”
Section: Model Details
confidence: 99%
“…Training and development corpora were taken from the WMT 2017 shared tasks (Bojar et al., 2017a). Neural Monkey (Helcl and Libovický, 2017) was used to train the NMT systems with the configuration provided by the WMT Neural MT Training Task.…”
Section: Finding Correlation Between Neural Network Attention and Out
confidence: 99%
“…Machine learning offers a systematic approach to integrating the scores of stand-alone metrics. In MT evaluation, various successful learning paradigms have been proposed (Bojar et al., 2016; Bojar et al., 2017), and existing learning-based metrics can be categorized as binary functions, "which classify the candidate translation as good or bad" (Kulesza and Shieber, 2004; Guzmán et al., 2015), or continuous functions, "which score the quality of translation on an absolute scale" (Song and Cohn, 2011; Albrecht and Hwa, 2008). Our research is conceptually similar to the work of Kulesza and Shieber (2004), which induces a "human-likeness" criterion.…”
Section: Literature Review
confidence: 99%
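
To make the distinction drawn in the excerpt above concrete, here is a minimal, self-contained Python sketch, not taken from any of the cited works: it contrasts a continuous scoring function (an absolute-scale quality score) with a binary function (a good/bad classification). The feature functions, weights, and threshold are illustrative assumptions, not the metrics used in the papers cited.

# Hypothetical illustration of the two metric families described above:
# a continuous function that scores translation quality on an absolute scale,
# and a binary function that classifies a candidate as "good" or "bad".
# All features, weights, and the decision threshold are invented for this sketch.

from collections import Counter


def ngram_overlap(candidate: str, reference: str, n: int = 1) -> float:
    """Fraction of candidate n-grams also present in the reference (clipped counts)."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    total = sum(cand_ngrams.values())
    if total == 0:
        return 0.0
    matched = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return matched / total


def continuous_metric(candidate: str, reference: str) -> float:
    """Continuous-style metric: weighted feature combination on a 0-1 scale."""
    # A learned metric would fit these weights on human judgments; here they are fixed.
    features = [ngram_overlap(candidate, reference, n) for n in (1, 2)]
    weights = [0.6, 0.4]
    return sum(w * f for w, f in zip(weights, features))


def binary_metric(candidate: str, reference: str, threshold: float = 0.5) -> bool:
    """Binary-style metric: thresholds the continuous score into good (True) / bad (False)."""
    return continuous_metric(candidate, reference) >= threshold


if __name__ == "__main__":
    ref = "the cat sat on the mat"
    hyp = "the cat is sitting on the mat"
    print("continuous score:", round(continuous_metric(hyp, ref), 3))
    print("binary decision :", "good" if binary_metric(hyp, ref) else "bad")

In this toy setup the same underlying score serves both families; real learned metrics differ mainly in whether they are trained as classifiers of human-likeness or as regressors against absolute human quality judgments.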