Proceedings of the Tenth Workshop on Statistical Machine Translation 2015
DOI: 10.18653/v1/w15-3037

QUality Estimation from ScraTCH (QUETCH): Deep Learning for Word-level Translation Quality Estimation

Abstract: This paper describes the system submitted by the University of Heidelberg to the Shared Task on Word-level Quality Estimation at the 2015 Workshop on Statistical Machine Translation. The submitted system combines a continuous space deep neural network that learns a bilingual feature representation from scratch with a linear combination of the manually defined baseline features provided by the task organizers. A combination of these orthogonal information sources shows significant improvements over the combin…
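As a rough illustration of the combination the abstract describes, here is a minimal Python/NumPy sketch of a QUETCH-style neural score over concatenated source/target window embeddings added to a linear model over the task's baseline features. All names, dimensions, and the simple weighted-sum combination are illustrative assumptions, not the authors' exact setup.

import numpy as np

# Illustrative sizes; the paper's actual hyperparameters may differ.
EMB_DIM = 50        # word embedding dimension
WINDOW = 3          # context window on each of the source and target sides
HIDDEN = 300        # hidden layer size
N_BASELINE = 25     # number of manually defined baseline features

rng = np.random.default_rng(0)

# QUETCH-style component: an MLP over the concatenated source/target
# window embeddings, producing a score for the OK/BAD decision.
W_hidden = rng.normal(scale=0.1, size=(2 * WINDOW * EMB_DIM, HIDDEN))
w_out = rng.normal(scale=0.1, size=HIDDEN)

def quetch_score(window_embeddings):
    # window_embeddings: flattened embeddings of the target word, its
    # context, and the aligned source window.
    h = np.tanh(window_embeddings @ W_hidden)
    return float(h @ w_out)

# Linear model over the baseline features provided by the task organizers.
w_baseline = rng.normal(scale=0.1, size=N_BASELINE)

def baseline_score(features):
    return float(features @ w_baseline)

# Hypothetical combination: a weighted sum of the two orthogonal scores.
def combined_score(window_embeddings, features, alpha=0.5):
    return alpha * quetch_score(window_embeddings) + (1 - alpha) * baseline_score(features)

# Stand-in inputs, just to show the shapes involved.
x = rng.normal(size=2 * WINDOW * EMB_DIM)
f = rng.normal(size=N_BASELINE)
print(combined_score(x, f))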

Cited by 60 publications (52 citation statements)
References 15 publications
“…In addition to that, six new features were included which contain combinations of other features, and which proved useful in (Kreutzer et al., 2015; Martins et al., 2016): …”
Section: Baseline Systems
Mentioning confidence: 99%
“…The system receives as input the source and target sentences s and t, their word-level alignments A, and their corresponding POS tags obtained from TurboTagger. The input layer follows a similar architecture as QUETCH (Kreutzer et al., 2015), with the addition of POS features. A vector representing each target word is obtained by concatenating the embedding of that word with those of the aligned word in the source.…”
Section: Neural System
Mentioning confidence: 99%
“…If there are multiple aligned source words, they are concatenated into a single feature. Kreutzer et al. (2015).…”
Section: Neural System
Mentioning confidence: 99%
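The two statements above describe how that system's input layer builds a vector for each target word. Below is a minimal sketch of that construction, assuming randomly initialized lookup tables and hypothetical helper names; following the quoted description, multiple aligned source words are merged into a single feature before lookup, and POS embeddings are concatenated alongside the word embeddings. This is an illustration under those assumptions, not the cited system's implementation.

import numpy as np

EMB_DIM = 64   # illustrative sizes, not the cited system's settings
POS_DIM = 16

rng = np.random.default_rng(1)

# Hypothetical embedding tables keyed by surface form / POS tag;
# a real system learns these, here they are filled lazily at random.
word_emb, pos_emb = {}, {}

def lookup(table, key, dim):
    if key not in table:
        table[key] = rng.normal(scale=0.1, size=dim)
    return table[key]

def target_word_vector(tgt_word, tgt_pos, aligned_src_words, aligned_src_pos):
    # Build the input-layer vector for one target word: its embedding and
    # POS embedding, concatenated with embeddings for the aligned source
    # side.  Following the quoted description, multiple aligned source
    # words are merged into a single feature (here: one joined key).
    src_key = "||".join(aligned_src_words) if aligned_src_words else "<unaligned>"
    src_pos_key = "||".join(aligned_src_pos) if aligned_src_pos else "<unaligned>"
    return np.concatenate([
        lookup(word_emb, tgt_word, EMB_DIM),
        lookup(pos_emb, tgt_pos, POS_DIM),
        lookup(word_emb, src_key, EMB_DIM),
        lookup(pos_emb, src_pos_key, POS_DIM),
    ])

# Example: target word "Haus" aligned to the source word "house".
vec = target_word_vector("Haus", "NN", ["house"], ["NN"])
print(vec.shape)   # (160,)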
“…In recent years, NNs have also shown promising results for sentence and word-level QE in different languages and domains (Kreutzer et al., 2015; Patel and Sasikumar, 2016). Moreover, focusing on the detection of grammatical errors from a different perspective, Liu and Liu (2016) proposed to derive positive and negative samples from unlabelled data, by generating grammatical errors artificially, and showed that RNNs outperform Support Vector Machines (SVMs) in judging the grammaticality of each word.…”
Section: Related Work
Mentioning confidence: 99%