Proceedings of the Third Conference on Machine Translation: Shared Task Papers 2018
DOI: 10.18653/v1/w18-6423

Tilde’s Machine Translation Systems for WMT 2018

Abstract: This paper describes the development process of Tilde's NMT systems that were submitted to the WMT 2018 shared task on news translation. We describe the data filtering and pre-processing workflows, the NMT system training architectures, and automatic evaluation results. For the WMT 2018 shared task, we submitted seven systems (both constrained and unconstrained) for the English-Estonian and Estonian-English translation directions. The submitted systems were trained using Transformer models.


Cited by 7 publications (6 citation statements)
References 19 publications (17 reference statements)
“…An additional system paper (Hu et al., 2018) describes a non-primary submission. (Pinnis et al., 2018) TILDE submitted four systems: TILDE-C-NMT, TILDE-C-NMT-COMB, TILDE-C-NMT-2BT and TILDE-NC-NMT. TILDE-C-NMT are constrained English-Estonian and Estonian-English NMT systems that were deployed as ensembles of averaged factored data Transformer models.…”
Section: Tencent (Wang et al., 2018a)
Citation type: mentioning
confidence: 99%
“…This year, we did not change the parallel and monolingual data pre-processing workflows that we used for our WMT 2018 submissions (Pinnis et al., 2018a).…”
Section: Data Pre-processing
Citation type: mentioning
confidence: 99%
“…In our submissions for WMT 2018, we introduced an automatic named entity (NE) post-editing (ANEPE) workflow (Pinnis et al., 2018a), which allowed fixing translations of single-word NEs and non-translatable words after NMT decoding. The method depends on the quality of word alignments.…”
Section: Automatic Named Entity Post-editing
Citation type: mentioning
confidence: 99%
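The ANEPE step quoted above (correcting single-word NE translations after NMT decoding via word alignments) can be sketched as follows. The alignment format ("src-tgt" index pairs, as produced by aligners such as fast_align) and the NE lexicon are illustrative assumptions, not Tilde's actual implementation:

```python
def post_edit_named_entities(src_tokens, tgt_tokens, alignment, ne_lexicon):
    """Overwrite target tokens that are aligned to a known single-word
    named entity but differ from its trusted translation."""
    edited = list(tgt_tokens)
    for pair in alignment.split():
        s, t = (int(i) for i in pair.split("-"))
        src_word = src_tokens[s]
        # Only single-word NEs with a trusted translation are corrected.
        if src_word in ne_lexicon and edited[t] != ne_lexicon[src_word]:
            edited[t] = ne_lexicon[src_word]
    return edited


src = "Tallinn is the capital of Estonia".split()
hyp = "Talin on Eesti pealinn".split()      # NMT output with a garbled NE
align = "0-0 1-1 3-3 5-2"                   # source-target word alignment
lexicon = {"Tallinn": "Tallinn", "Estonia": "Eesti"}
print(post_edit_named_entities(src, hyp, align, lexicon))
```

As the citing paper notes, the method stands or falls with word-alignment quality: a wrong alignment link would overwrite the wrong target token.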
“…Aside from visualising and interpreting NMT output, attention alignments are also used to get hard word alignments in order to correctly translate structured documents and reconstruct the structure after translating (Pinnis et al., 2018b). To achieve similar results with transformer-based NMT models, several approaches have been explored, such as learning guided alignments, averaging attention matrices, and using fast_align (Dyer et al., 2013) to generate alignments after the translation has been produced (e.g., Pinnis et al. (2018a)). The latter approach is of no use for interpreting NMT output, as it uses a separate model and only attempts to guess what the alignments are after the result has been produced.…”
Section: Transformer Models
Citation type: mentioning
confidence: 99%
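One of the alignment-extraction approaches listed in the excerpt above, averaging attention matrices, can be sketched in plain Python. The head-major matrix layout and the per-token argmax heuristic are assumptions for illustration, not a specific toolkit's behaviour:

```python
def hard_alignments(head_attns):
    """head_attns: per-head cross-attention matrices, each tgt_len x src_len.
    Averages the heads, then aligns every target token to the source token
    that receives the most averaged attention."""
    tgt_len, src_len = len(head_attns[0]), len(head_attns[0][0])
    pairs = []
    for t in range(tgt_len):
        avg = [sum(head[t][s] for head in head_attns) / len(head_attns)
               for s in range(src_len)]
        pairs.append((t, avg.index(max(avg))))
    return pairs


# Two attention heads over 3 target and 3 source tokens.
attn_heads = [
    [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]],
    [[0.6, 0.3, 0.1], [0.1, 0.8, 0.1], [0.2, 0.1, 0.7]],
]
print(hard_alignments(attn_heads))
```

Unlike the fast_align-based post-hoc approach criticised in the excerpt, this uses the model's own attention weights, so the extracted alignments can also serve interpretation.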