2017
DOI: 10.18178/ijlll.2017.3.1.100

Assessing the Translation of Google and Microsoft Bing in Translating Political Texts from Arabic into English

Abstract: Online machine translation (OMT) systems are widely used throughout the world, freely or at low cost. Most of these systems use statistical machine translation (SMT), which relies on a corpus of translation examples from which the system learns how to translate correctly. Online automatic machine translation systems differ widely in their effectiveness and accuracy. The wide spread of such translation platforms therefore makes it necessary to evaluate their output in order to shed light on the capacity and …

Cited by 10 publications (10 citation statements)
References 8 publications (6 reference statements)
“…Almahasees [24] compared the two most popular machine translation systems, Google Translate and the Microsoft Bing translator. Both systems used statistical machine translation.…”
Section: Related Work (mentioning)
confidence: 99%
“…Almahasees, 2020). In most cases, translation quality looks for output clarity, adequacy, and fluency as prerequisites to determine output acceptability (Z. M. Almahasees, 2017). Translation quality requires comprehension to determine various kinds of translation equivalence and identify translation errors (Chan, 2014).…”
Section: Machine Translation Evaluation (MTE) (mentioning)
confidence: 99%
“…MT could be evaluated manually and automatically [29], [30]. The author in [31] states that manual evaluation investigates the systems' usability via human participants by means of Error Analysis […], whereas automatic evaluation examines MT outputs through the text's similarity to a referenced translation.…”
Section: Machine Translation Evaluation (mentioning)
confidence: 99%
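The last excerpt's distinction between manual and automatic evaluation points at the core idea behind reference-based metrics such as BLEU: the system output is scored by its n-gram overlap with a human reference translation. The sketch below is not taken from the cited papers; it is a minimal, hand-rolled approximation of clipped n-gram precision against a single reference, with invented example sentences and function names, included only to make the "similarity to a referenced translation" idea concrete.

```python
# Illustrative sketch only: clipped n-gram precision of an MT output against
# one reference translation. Real evaluations use full metric toolkits
# (e.g., BLEU implementations); all names and sentences here are invented.
from collections import Counter


def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision of a candidate translation vs. one reference."""
    cand_tokens = candidate.lower().split()
    ref_tokens = reference.lower().split()
    cand_ngrams = Counter(tuple(cand_tokens[i:i + n])
                          for i in range(len(cand_tokens) - n + 1))
    ref_ngrams = Counter(tuple(ref_tokens[i:i + n])
                         for i in range(len(ref_tokens) - n + 1))
    if not cand_ngrams:
        return 0.0
    # Clip each candidate n-gram count by its count in the reference,
    # so repeated words cannot inflate the score.
    overlap = sum(min(count, ref_ngrams[gram])
                  for gram, count in cand_ngrams.items())
    return overlap / sum(cand_ngrams.values())


if __name__ == "__main__":
    mt_output = "the minister announced a new policy yesterday"
    reference = "the minister announced a new policy on sunday"
    for n in (1, 2):
        print(f"{n}-gram precision: {ngram_precision(mt_output, reference, n):.2f}")
```

In this toy example the unigram precision is high and the bigram precision slightly lower, which mirrors how reference-based metrics reward lexical overlap more readily than correct phrasing; manual evaluation of adequacy and fluency, as described in the excerpts above, catches what such surface similarity misses.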