2006
DOI: 10.1109/tsa.2005.860770

Using Multiple Edit Distances to Automatically Grade Outputs From Machine Translation Systems

Abstract: This paper addresses the challenging problem of automatically evaluating output from machine translation (MT) systems in order to support the developers of these systems. Conventional approaches to the problem assign a rank such as A, B, C, or D to MT output according to a single edit distance between this output and a correct translation example. The single edit distance can be designed in different ways, and changing its design makes assigning a certain rank more accurate, but an…
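The baseline the abstract describes is a single edit distance between an MT output and a reference translation. Below is a minimal sketch of such a word-level edit distance; the function name, tokenization, and example strings are illustrative assumptions, not the paper's implementation.

```python
def word_edit_distance(hypothesis: str, reference: str) -> int:
    """Levenshtein distance over word tokens (insert/delete/substitute cost 1)."""
    hyp, ref = hypothesis.split(), reference.split()
    # dp[j] holds the distance between hyp[:i] and ref[:j]; rows advance in place.
    dp = list(range(len(ref) + 1))
    for i in range(1, len(hyp) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(ref) + 1):
            cur = dp[j]
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dp[j] = min(dp[j] + 1,      # delete a hypothesis word
                        dp[j - 1] + 1,  # insert a reference word
                        prev + cost)    # substitute (or match)
            prev = cur
    return dp[-1]

if __name__ == "__main__":
    # One missing word -> distance 1.
    print(word_edit_distance("the cat sat on mat", "the cat sat on the mat"))
```

A rank such as A, B, C, or D can then be assigned by thresholding this distance (normalized by reference length), which is the single-measure scheme the paper improves upon.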

Cited by 20 publications (24 citation statements) · References 16 publications

“…The evaluation of MT from Arabic to English and from Mandarin to English ranks MT output against a reference human translation, with an expert judging which output is closer to the human translation. Reference [7], Using Multiple Edit Distances to Automatically Grade Outputs from Machine Translation Systems, presented an evaluation method for MT systems that serve as subsystems of speech-to-speech MT (SSMT) systems. The method is a "grader based on edit distance" that computes the score of an MT output by using a decision tree.…”
Section: Review Of Relevant Literature
confidence: 99%
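The "grader based on edit distance" quoted above scores an MT output with a decision tree over edit-distance features. Here is a hedged sketch of that idea, assuming a hypothetical three-feature representation and toy training data; scikit-learn's DecisionTreeClassifier stands in for the paper's decision tree.

```python
from sklearn.tree import DecisionTreeClassifier

# Each row: length-normalized edit distances under different matching
# criteria, e.g. [word match, stem match, part-of-speech match] against
# a reference translation. Values and labels below are invented toy data.
X_train = [
    [0.05, 0.03, 0.02],   # near-perfect output
    [0.20, 0.12, 0.08],
    [0.45, 0.30, 0.22],
    [0.80, 0.65, 0.50],   # badly garbled output
]
y_train = ["A", "B", "C", "D"]  # human-assigned ranks

grader = DecisionTreeClassifier(max_depth=3, random_state=0)
grader.fit(X_train, y_train)

# Grade a new MT output from its edit-distance feature vector.
print(grader.predict([[0.18, 0.10, 0.07]]))  # likely "B" on this toy data
```

Using a tree rather than a fixed threshold lets the grader combine several distances at once, which is the point of the multiple-edit-distance design.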
“…Another method for evaluating machine translation systems was presented by Akiba et al. [11]. Their study was dedicated to evaluating machine translation (MT) systems that are subsystems of speech-to-speech MT (SSMT) systems.…”
Section: Literature Review
confidence: 99%
“…[Quirk 1994] also investigates the feasibility of various learning approaches to the multiclass classification problem on a very small data set in the domain of technical documentation. [Akiba et al 2001] [2] used decision-tree (DT) classifiers trained on multiple edit-distance features, where combinations of lexical (stem, word, part-of-speech) and semantic (thesaurus-based semantic class) matches were used to compare MT system outputs with reference translations and to approximate human acceptability scores directly. [Kulesza and Shieber 2004] [13] trained a binary SVM classifier on automatic scoring features to distinguish between "human-produced" and "machine-generated" translations of newswire data, instead of predicting human judgments directly.…”
Section: Prediction Of Human Assessments
confidence: 99%
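The passage above describes extracting several edit distances by matching tokens under different criteria (word, stem, part-of-speech, semantic class). A sketch under stated assumptions: each criterion is modeled as a token normalizer, and each normalizer yields one length-normalized edit-distance feature. The toy "stemmer" and "tagger" below are illustrative stand-ins for the linguistic resources the cited work used.

```python
from typing import Callable, List

def edit_distance(a: List[str], b: List[str]) -> int:
    """Plain Levenshtein distance over token sequences."""
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1,
                        prev + (a[i - 1] != b[j - 1]))
            prev = cur
    return dp[-1]

def features(hyp: str, ref: str,
             normalizers: List[Callable[[str], str]]) -> List[float]:
    """One length-normalized edit distance per matching criterion."""
    feats = []
    for norm in normalizers:
        h = [norm(t) for t in hyp.split()]
        r = [norm(t) for t in ref.split()]
        feats.append(edit_distance(h, r) / max(len(r), 1))
    return feats

# Hypothetical criteria: exact word, crude suffix-stripping "stem",
# and a coarse POS-like class (toy stand-ins, not real NLP tools).
surface = lambda t: t.lower()
stem = lambda t: t.lower().rstrip("s")
pos = lambda t: "N" if t[0].isupper() else "x"

print(features("The cats sat", "the cat sits", [surface, stem, pos]))
```

Feature vectors produced this way are exactly the kind of input the DT classifier in the previous sketch consumes.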