Abstract: This study aims to compare the effectiveness of two popular machine translation systems (Google Translate and the Babylon machine translation system) for translating English sentences into Arabic, relative to the effectiveness of English-to-Arabic human translation. Among the many automatic methods used to evaluate machine translators, the Bilingual Evaluation Understudy (BLEU) method was adopted and implemented to achieve the main goal of this study. The BLEU method is bas…
“…BLEU is highly constructed on an essential notion for determining the goodness of a particular MT programme. It could be made briefly by the proximity of the proposed outcome of the MT scheme with indication to a translated text done by an (experienced human) translation of the text itself [8].…”
Evaluation is an important part of the system development cycle; it also contributes to improving new machine translation (MT) technology by comparing new systems with the traditional systems available, in order to determine the weaknesses and strengths to be addressed in the proposed MT system. This work presents a study evaluating the performance and effectiveness of the English-Arabic DIA translator in the domain of the sulfur industry (DSI). The study conducted the evaluation by comparing this program with the prominent Google translator, applying the Bilingual Evaluation Understudy (BLEU) method to the translations of 1,200 English sentences. The results obtained show that the efficiency of the Google translator is about 30.325%, while the efficiency of the DIA translator in the sulfur industry domain is about 73.325%, making it more effective and more accurate. The efficiency of the BLEU method itself is about 90.478% compared with the human expert evaluator.
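The core idea of BLEU described above (scoring a candidate translation by how closely its n-grams match a human reference) can be sketched as follows. This is a minimal, self-contained illustration of sentence-level BLEU with a single reference and a small smoothing constant, not the exact implementation or parameters used in the study:

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against one reference translation.

    Combines clipped n-gram precisions (n = 1..max_n) via a geometric
    mean, multiplied by a brevity penalty that punishes candidates
    shorter than the reference. A tiny epsilon smooths zero counts.
    """
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(max(clipped, 1e-9) / total)
    # Brevity penalty: 1 if the candidate is at least as long as the reference.
    if len(cand) >= len(ref):
        bp = 1.0
    else:
        bp = math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0, and any divergence in word choice, order, or length lowers the score; averaging such sentence scores over the 1,200-sentence test set would yield a system-level figure comparable to the percentages reported above.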