2013
DOI: 10.14569/ijacsa.2013.040109
Evaluating English to Arabic Machine Translation Using BLEU

Abstract: This study compares the effectiveness of two popular machine translation systems (Google Translate and the Babylon machine translation system) in translating English sentences into Arabic, relative to the effectiveness of English-to-Arabic human translation. Many automatic methods exist for evaluating machine translators; one of them, the Bilingual Evaluation Understudy (BLEU) method, was adopted and implemented to achieve the main goal of this study. The BLEU method is bas…
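The BLEU evaluation the abstract describes rests on clipped n-gram precision against a human reference, combined with a brevity penalty. A minimal sketch of a single-reference, sentence-level BLEU (the function name and tokenization are illustrative, not taken from the paper):

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU for one candidate against a single
    reference translation; both are lists of tokens."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(candidate[i:i + n])
                              for i in range(len(candidate) - n + 1))
        ref_ngrams = Counter(tuple(reference[i:i + n])
                             for i in range(len(reference) - n + 1))
        # Clipped counts: each candidate n-gram is credited at most as
        # many times as it occurs in the reference.
        clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = sum(cand_ngrams.values())
        if total == 0 or clipped == 0:
            return 0.0
        log_precisions.append(math.log(clipped / total))
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = (1.0 if len(candidate) >= len(reference)
          else math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(log_precisions) / max_n)

# A candidate identical to the reference scores 1.0.
sent = "the cat sat on the mat".split()
print(bleu(sent, sent))  # → 1.0
```

In practice, the paper's comparison of Google Translate and Babylon would score each system's Arabic output against the human translations with a corpus-level variant of this computation.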

Cited by 9 publications (1 citation statement)
References 20 publications (20 reference statements)
“…BLEU is built on a central notion for determining the quality of a given MT system: briefly, the closeness of the MT system's output to a reference translation of the same text produced by an experienced human translator [8].…”
mentioning
confidence: 99%