2021
DOI: 10.1080/03772063.2021.1962745
AdaBLEU: A Modified BLEU Score for Morphologically Rich Languages

Cited by 11 publications (7 citation statements); references 7 publications.
“…However, there are several fully Automatic Machine Translation Evaluation (AMTE) metrics. They can be classified into five categories [4]: lexical [31,23], character [30], semantic [18,24], syntactic [3,13,19,5], and semantic-syntactic metrics [7].…”
Section: State of the Art
confidence: 99%
“…The accuracy of the module has been measured using the bilingual evaluation understudy (BLEU) [23] score, a technique that compares machine-translated sentences against a collection of reference sentences.…”
Section: Bidirectional Translator
confidence: 99%
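The comparison BLEU performs — clipped n-gram precision against the references, combined by a geometric mean and scaled by a brevity penalty — can be sketched in a few lines. This is a minimal single-sentence illustration with uniform weights and no smoothing, not the exact formulation of any particular toolkit or of the AdaBLEU modification:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU sketch: clipped n-gram precisions (n = 1..max_n),
    uniform-weight geometric mean, brevity penalty. Inputs are token lists."""
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        # Clip each candidate n-gram count by its maximum count in any reference.
        max_ref = Counter()
        for ref in references:
            for gram, c in Counter(ngrams(ref, n)).items():
                max_ref[gram] = max(max_ref[gram], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if clipped == 0:
            return 0.0  # any zero precision drives the geometric mean to zero
        log_prec_sum += (1.0 / max_n) * math.log(clipped / total)
    # Brevity penalty against the closest-length reference.
    c_len = len(candidate)
    r_len = min((len(r) for r in references), key=lambda r: (abs(r - c_len), r))
    bp = 1.0 if c_len > r_len else math.exp(1 - r_len / c_len)
    return bp * math.exp(log_prec_sum)
```

An identical candidate and reference score 1.0; any missing or reordered content lowers one of the clipped precisions and hence the score. The count-clipping step is why surface-form mismatches in inflected languages hurt BLEU: morphological variants of the same lemma count as distinct n-grams, which is the weakness the cited work on modified metrics addresses.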
“…The BLEU metric is statistically based and applicable to any language. Yet it does not consider lexical relations between words (Chauhan et al., 2021), which is an important issue for highly inflective languages such as Lithuanian. Since BLEU scores have been shown to correlate less well with human evaluations in such languages, researchers and developers have proposed alternative or modified evaluation scores, e.g.…”
Section: Literature Review
confidence: 99%
“…Since BLEU scores have been shown to correlate less well with human evaluations in such languages, researchers and developers have proposed alternative or modified evaluation scores, e.g. AdaBLEU, which takes into account the lexical and syntactic properties of morphologically rich languages (Chauhan et al., 2021). Moreover, the reliability of these metrics for low-resource languages has not been confirmed either (Kocmi et al., 2021).…”
Section: Literature Review
confidence: 99%