Proceedings of the Second Conference on Machine Translation 2017
DOI: 10.18653/v1/w17-4764

Unbabel's Participation in the WMT17 Translation Quality Estimation Shared Task

Abstract: This paper presents the contribution of the Unbabel team to the WMT 2017 Shared Task on Translation Quality Estimation. We participated in the word-level and sentence-level tracks. We describe our two submitted systems: (i) STACKEDQE, a "pure" QE system, trained only on the provided training sets, which is a stacked combination of a feature-rich sequential linear model with a neural network, and (ii) FULLSTACKEDQE, which also stacks the predictions of an automatic post-editing system, trained on additional dat…

Cited by 23 publications (26 citation statements)
References 6 publications
“…The sentence level results of WMT 2017 are listed in Table 1. We mainly compared our single model with the two algorithms (Kim et al. 2017; Martins, Kepler, and Monteiro 2017), ranked in the top 3 of the WMT 2017 finalists. Unbabel is a combination of a feature-rich sequential linear model with a neural network.…”
Section: Sentence Level Scoring and Ranking
confidence: 99%
“…2016; Scarton et al. 2016; Kim, Lee, and Na 2017; Hokamp 2017; Martins, Kepler, and Monteiro 2017).…”
Section: Related Work
confidence: 99%
“…(2017) advanced this model by applying a stack propagation method, allowing backpropagation from the quality estimator to the word predictor, and achieved improvements in QE performance. Martins, Kepler, and Monteiro (2017) proposed a word-level QE system that consists of a neural model stacked into a linear feature-rich classifier. In this system, they use a number of features that depend on the context of a given target word and its aligned source word, as well as syntactic features that provide information about the dependency structure of a given sentence.…”
Section: Related Work
confidence: 99%
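The stacking idea the citing papers describe can be illustrated with a minimal toy sketch: the neural component's word-level BAD probability is appended as one extra feature to the hand-crafted feature vector, and a linear classifier makes the final OK/BAD decision. All names, weights, and the stand-in neural scorer below are hypothetical, not the authors' implementation.

```python
# Toy sketch of stacked word-level QE (all names and values are illustrative).

def neural_score(word_feats):
    # Stand-in for the neural component: any model mapping a word's
    # features to a probability that the word is BAD.
    return min(1.0, 0.1 * sum(word_feats))

def stacked_features(word_feats):
    # Stacking: hand-crafted features plus the neural prediction
    # appended as one additional feature.
    return word_feats + [neural_score(word_feats)]

def linear_predict(weights, feats):
    # Feature-rich linear classifier over the stacked feature vector.
    score = sum(w * f for w, f in zip(weights, feats))
    return "BAD" if score > 0 else "OK"

# Usage: 3 hand-crafted features per word, plus 1 stacked neural feature.
weights = [0.5, -0.2, 0.3, 1.5]   # last weight acts on the neural score
word = [1.0, 0.0, 2.0]            # e.g. context/alignment/syntax indicators
print(linear_predict(weights, stacked_features(word)))  # prints "BAD"
```

The point of the design is that the linear model can learn how much to trust the neural prediction relative to the interpretable features, rather than replacing them.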