Proceedings of the Second Conference on Machine Translation 2017
DOI: 10.18653/v1/w17-4717
Findings of the 2017 Conference on Machine Translation (WMT17)

Cited by 292 publications (268 citation statements)
References: 61 publications
“…Next, we discuss the compilation of German-English and English-German corpora. We select these pairs, as they are among the most studied in MT, and comparatively high results are obtained for them (Bojar et al, 2017). Hence, they are more likely to benefit from a fine-grained analysis.…”
Section: A Test Case On Extracting Sets (mentioning)
confidence: 99%
“…We use English translations of the Chinese source texts in the WMT 2017 English-Chinese test set (Bojar et al, 2017) for all experiments presented in this article:…”
Section: Translations (mentioning)
confidence: 99%
“…Mean standardized scores for translation task participating systems were computed by firstly taking the average of scores for individual translations in the test set (since some were assessed more than once), before combining all scores for translations attributed to a given MT system into its overall adequacy score. The gold standard for system-level DA evaluation is thus what is denoted "Ave z" in Findings 2017 (Bojar et al, 2017a).…”
Section: System-level Manual Quality Judgments (mentioning)
confidence: 99%
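
The two-step averaging quoted above can be illustrated with a short sketch. The snippet below assumes a flat list of (system, segment id, standardized score) records; the function name and data layout are hypothetical stand-ins, not the actual WMT evaluation tooling.

```python
# Minimal sketch of the averaging described in the quoted passage.
# Field names and data layout are illustrative assumptions.
from collections import defaultdict
from statistics import mean

def system_ave_z(assessments):
    """assessments: iterable of (system, segment_id, z_score) tuples,
    where z_score is a human adequacy score standardized per annotator."""
    # Step 1: average repeated assessments of the same translation.
    per_segment = defaultdict(list)
    for system, segment_id, z in assessments:
        per_segment[(system, segment_id)].append(z)
    segment_means = {key: mean(zs) for key, zs in per_segment.items()}

    # Step 2: combine segment-level means into one score per system ("Ave z").
    per_system = defaultdict(list)
    for (system, _segment_id), z in segment_means.items():
        per_system[system].append(z)
    return {system: mean(zs) for system, zs in per_system.items()}

# Toy usage example:
scores = [("sysA", 1, 0.2), ("sysA", 1, 0.4), ("sysA", 2, -0.1),
          ("sysB", 1, 0.5), ("sysB", 2, 0.3)]
print(system_ave_z(scores))  # approximately {'sysA': 0.1, 'sysB': 0.4}
```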
“…The metrics task itself then needs manual judgements of translation quality in order to check the extent to which the automatic metrics can approximate the judgement. For situations where the reference translation is not available, please consult the results of Quality Estimation Task (Bojar et al, 2017a).…”
Section: Introduction (mentioning)
confidence: 99%
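
As a companion to the quoted point about checking how closely automatic metrics approximate the manual judgements, here is a small illustrative sketch of a system-level correlation check. The system names and score values are hypothetical placeholders, not WMT17 results.

```python
# Illustrative system-level check: correlate an automatic metric's scores
# with human adequacy scores ("Ave z"). All values below are hypothetical.
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

human_ave_z = {"sysA": 0.10, "sysB": 0.40, "sysC": -0.25}   # hypothetical
metric_score = {"sysA": 32.1, "sysB": 35.8, "sysC": 28.4}   # hypothetical

systems = sorted(human_ave_z)
r = pearson([metric_score[s] for s in systems],
            [human_ave_z[s] for s in systems])
print(f"system-level Pearson r = {r:.3f}")
```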