2019
DOI: 10.1609/aaai.v33i01.33016367

“Bilingual Expert” Can Find Translation Errors

Abstract: The performance of machine translation (MT) systems is usually evaluated with the BLEU metric when golden references are provided. However, at model inference time or in production deployment, golden references are usually expensive to obtain, e.g., human annotation with bilingual expertise. To address the issue of translation quality estimation (QE) without references, we propose a general framework for automatic evaluation of the translation output for the QE task in the Conference on Sta…

Cited by 42 publications (34 citation statements)
References 2 publications

“…These features will be fed into a quality estimator to estimate the translation quality. The Bilingual Expert model uses a self-attention mechanism and transformer neural networks to construct a bidirectional transformer architecture (Fan et al., 2018), serving as a conditional language model. It is used to predict every single word in the target sentence given the entire source sentence and its target context.…”
Section: QE Brain Baseline Model
Confidence: 99%
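To make the description above concrete, here is a minimal, hypothetical sketch of such a bidirectional conditional language model in PyTorch. It is not the authors' released QE Brain code: the class name, layer sizes, and the left/right-context shifting scheme are assumptions that merely illustrate the idea of predicting each target token from the full source sentence and the target context on both sides.

```python
import torch
import torch.nn as nn

class BilingualExpertSketch(nn.Module):
    """Toy conditional LM: predicts every target token from the whole
    source sentence plus the token's left and right target context.
    (Hypothetical sketch, not the QE Brain implementation.)"""

    def __init__(self, vocab_size, d_model=128, nhead=4, nlayers=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.src_enc = nn.TransformerEncoder(enc_layer, nlayers)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        # One decoder reads the target left-to-right, the other right-to-left.
        self.fwd_dec = nn.TransformerDecoder(dec_layer, nlayers)
        self.bwd_dec = nn.TransformerDecoder(dec_layer, nlayers)
        self.out = nn.Linear(2 * d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        B, T = tgt_ids.shape
        memory = self.src_enc(self.emb(src_ids))  # encode the full source
        # Causal mask: position i attends to j <= i; its transpose gives
        # the mirror-image mask (j >= i) for the backward stream.
        causal = torch.triu(
            torch.full((T, T), float("-inf"), device=tgt_ids.device),
            diagonal=1,
        )
        tgt = self.emb(tgt_ids)
        h_fwd = self.fwd_dec(tgt, memory, tgt_mask=causal)
        h_bwd = self.bwd_dec(tgt, memory, tgt_mask=causal.T)
        # Shift both streams so position i combines strictly-left and
        # strictly-right context -- the target token itself is never visible.
        pad = h_fwd.new_zeros(B, 1, h_fwd.size(-1))
        left = torch.cat([pad, h_fwd[:, :-1]], dim=1)
        right = torch.cat([h_bwd[:, 1:], pad], dim=1)
        return self.out(torch.cat([left, right], dim=-1))  # per-token logits
```

In a QE pipeline of this shape, the per-position predictions (e.g., whether the model's most likely token matches the actual MT output token, together with that token's predicted probability) would serve as the features passed on to the downstream quality estimator.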
“…We can also use Byte-Pair Encoding (BPE) tokenization as a substitute for word tokenization in text pre-processing. Fan et al. (2018) compared the performance of word and BPE tokenization at both the sentence and word levels in WMT18, and the results show that models with BPE tokenization can produce comparable or better results than those with word tokenization.…”
Section: Greedy Ensemble Selection
Confidence: 99%
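As a concrete aside on the tokenization comparison, the sketch below shows how BPE segments a single word given an already-learned merge table. The merge list here is a toy assumption; in practice the merges are learned from subword-pair frequencies in the training corpus, not hand-written.

```python
def bpe_encode(word, merges):
    """Apply learned BPE merges to one word, always merging the
    highest-priority (earliest-learned) adjacent pair first."""
    symbols = list(word) + ["</w>"]  # end-of-word marker
    ranks = {pair: i for i, pair in enumerate(merges)}
    while len(symbols) > 1:
        pairs = [(symbols[i], symbols[i + 1]) for i in range(len(symbols) - 1)]
        best = min(pairs, key=lambda p: ranks.get(p, float("inf")))
        if best not in ranks:
            break  # no learned merge applies to any remaining pair
        i = pairs.index(best)
        symbols[i:i + 2] = [best[0] + best[1]]
    return symbols

# Toy merge table (assumed for illustration, not learned from WMT data):
merges = [("e", "r"), ("er", "</w>"), ("l", "o"), ("lo", "w")]
print(bpe_encode("lower", merges))  # ['low', 'er</w>']
```

Because merges are applied in the order they were learned, frequent subwords like "er" surface as single units, which is what lets a BPE vocabulary cover rare words that a word-level vocabulary would map to an unknown token.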