2020
DOI: 10.20944/preprints202012.0580.v1
Preprint
Comparing Statistical and Neural Machine Translation Performance on Hindi-to-Tamil and English-to-Tamil

Abstract: Statistical machine translation (SMT), which was the dominant paradigm in machine translation (MT) research for nearly three decades, has recently been superseded by end-to-end deep learning approaches to MT. Although deep neural models produce state-of-the-art results in many translation tasks, they are found to under-perform in resource-poor scenarios. Despite some success, none of the present-day benchmarks that have tried to overcome this problem can be regarded as a universal solution to the problem of …

Cited by 5 publications (4 citation statements)
References 24 publications (34 reference statements)
“…They report that techniques such as domain adaptation and back-translation can make training NMT systems on low-resource languages feasible. Similar findings were also reported by Ramesh et al (2020) for Tamil and by Dandapat and Federmann (2018) for Telugu.…”
Section: Neural Machine Translation (supporting)
confidence: 90%
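The back-translation technique mentioned in the statement above can be sketched as a simple data-augmentation step. The sketch below is illustrative only and assumes a hypothetical `reverse_model` callable standing in for a trained target-to-source translation model; it is not the cited work's pipeline:

```python
import random

def back_translate(monolingual_target, reverse_model, parallel_pairs, seed=0):
    """Augment a parallel corpus with synthetic (source, target) pairs.

    Each monolingual target-side sentence is translated back into the
    source language by `reverse_model` (a hypothetical target->source
    translator), then paired with the original sentence.
    """
    synthetic = [(reverse_model(t), t) for t in monolingual_target]
    augmented = list(parallel_pairs) + synthetic
    random.Random(seed).shuffle(augmented)  # mix real and synthetic pairs
    return augmented

# Usage with a dummy "model" that just tags the sentence:
pairs = [("hello", "வணக்கம்")]
mono = ["தமிழ் ஒரு மொழி"]
data = back_translate(mono, lambda t: "<synthetic> " + t, pairs)
```

The appeal for low-resource pairs such as English–Tamil is that monolingual target-side text is far easier to collect than parallel data.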
“…A low BLEU score indicates differences in n-grams between the machine translation and the reference translation (i.e., the human translation). Similarly, Ramesh et al (2020) achieved a notably low BLEU score for English-to-Tamil translation. They argued that the nature of the language increases the number of n-gram mismatches with the reference translation, even when the translation itself is of good quality.…”
Section: Human Translation as a Reference (mentioning)
confidence: 97%
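The n-gram-mismatch argument can be illustrated with the clipped n-gram precision at the core of BLEU (Papineni et al., 2002). The toy tokens below are invented for illustration and are not from the cited experiments; the "fused" candidate token mimics how an agglutinative language like Tamil can pack a multi-word reference phrase into one surface form:

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision: the fraction of candidate n-grams that
    also occur in the reference (each reference n-gram counted at most
    as often as it appears there). BLEU combines this over n = 1..4."""
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

# A faithful translation can still share few surface n-grams with the
# reference when morphology fuses words into single tokens:
reference = ["he", "came", "to", "the", "house"]
candidate = ["he", "house-to", "came"]  # toy "fused" morphology
print(ngram_precision(candidate, reference, 1))  # 2/3: only "he", "came" match
print(ngram_precision(candidate, reference, 2))  # 0.0: no shared bigrams
```

This is why a low BLEU score against a single reference does not by itself establish poor translation quality for morphologically rich languages.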
“…LSTM uses memory cells to retain values and requires only a minimal amount of training data. Hence, the proposed system improves on existing systems by using an NMT model with GRU/LSTM units in both the encoding and decoding phases (7)(8)(9). The proposed work was implemented on a corpus of several thousand sentences.…”
Section: Introduction (mentioning)
confidence: 99%
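The gated recurrent units mentioned in the statement above can be sketched as a single GRU cell step. The NumPy implementation below is a generic illustration of GRU gating under assumed weight names (Wz, Uz, Wr, Ur, Wh, Uh), not the cited system's actual model:

```python
import numpy as np

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step: gates decide how much of the previous hidden
    state h to keep versus overwrite with a new candidate state."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1.0 - z) * h + z * h_tilde        # interpolate old and new

# Shape check with random weights (input dim 3, hidden dim 4):
rng = np.random.default_rng(0)
W = [rng.normal(size=(3, 4)) for _ in range(3)]  # Wz, Wr, Wh
U = [rng.normal(size=(4, 4)) for _ in range(3)]  # Uz, Ur, Uh
h_next = gru_cell(rng.normal(size=3), np.zeros(4),
                  W[0], U[0], W[1], U[1], W[2], U[2])
```

In an encoder-decoder NMT system, a stack of such cells encodes the source sentence into hidden states and another stack decodes the target sentence from them.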