2021
DOI: 10.1007/s12652-021-03479-0
A comprehensive survey on machine translation for English, Hindi and Sanskrit languages

Cited by 17 publications (7 citation statements)
References 60 publications
“…Several studies have assessed the efficacy of various word embedding models spanning diverse linguistic contexts and applications. Investigations have ranged from comparing pre-trained word embedding vectors for word-level semantic text similarity in Turkish [26] to evaluating neural machine translation (NMT) for languages such as English and Hindi [27]. Additionally, the accuracy of three prominent word embedding models within the context of convolutional neural network (CNN) text classification [28] has been explored.…”
Section: Word Embedding (mentioning)
confidence: 99%
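As a quick illustration of what "comparing pre-trained word embedding vectors for word-level semantic text similarity" involves, the sketch below scores word pairs by the cosine similarity of their embedding vectors. The vectors are made-up toy values standing in for real pre-trained embeddings (e.g. word2vec or GloVe); nothing here is taken from the cited studies.

```python
# Illustrative sketch only: word-level semantic similarity from embeddings.
# The toy 4-dimensional vectors below are hypothetical stand-ins for real
# pre-trained embeddings, which typically have 100-300+ dimensions.
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity in [-1, 1]; higher means more semantically similar."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

embeddings = {
    "king":  np.array([0.80, 0.65, 0.10, 0.05]),
    "queen": np.array([0.75, 0.70, 0.12, 0.08]),
    "apple": np.array([0.05, 0.10, 0.90, 0.70]),
}

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low: unrelated words
```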
“…H = Encoder([t X]) (1), S = Decoder(y, H) (2). We choose the Transformer as the foundation of the NMT model because of its superior multilingual performance (Bawa & Kumar, 2021). A self-attention sub-layer and a point-wise feed-forward sub-layer are present in each layer of the encoder's stack of L = 6 identical layers.…”
Section: 𝐇 = Encoder([t X]) (mentioning)
confidence: 99%
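The snippet above describes a standard Transformer encoder-decoder NMT setup: contextual source representations H = Encoder([t X]) feed a decoder producing S = Decoder(y, H), with an encoder stack of L = 6 identical layers, each combining a self-attention sub-layer and a point-wise feed-forward sub-layer. Below is a minimal sketch of that structure using PyTorch's built-in Transformer modules; the model dimensions (d_model = 512, 8 heads, feed-forward size 2048) are assumed Transformer-base defaults rather than values taken from the cited paper, and the tensors are random stand-ins for embedded source and target sequences.

```python
# Illustrative sketch only: a generic Transformer encoder-decoder mirroring
# H = Encoder([t X]) and S = Decoder(y, H) from the citation statement above.
# Dimensions are assumed Transformer-base defaults, not taken from the paper.
import torch
import torch.nn as nn

d_model, n_heads, d_ff, num_layers = 512, 8, 2048, 6  # L = 6 identical layers

# Each encoder layer contains a multi-head self-attention sub-layer and a
# point-wise feed-forward sub-layer, as described in the snippet.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=d_ff,
                               batch_first=True),
    num_layers=num_layers,
)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, n_heads, dim_feedforward=d_ff,
                               batch_first=True),
    num_layers=num_layers,
)

# Random stand-ins for embedded inputs: x for the (tag-prefixed) source
# sequence [t X], y for the shifted target sequence.
x = torch.randn(2, 10, d_model)  # (batch, source length, d_model)
y = torch.randn(2, 7, d_model)   # (batch, target length, d_model)

H = encoder(x)     # H = Encoder([t X]): contextual source representations
S = decoder(y, H)  # S = Decoder(y, H): target-side hidden states
print(H.shape, S.shape)  # torch.Size([2, 10, 512]) torch.Size([2, 7, 512])
```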
“…Several studies have been conducted, spanning diverse linguistic contexts and applications, to assess the efficacy of various word embedding models. Investigations have ranged from comparing pre-trained word embedding vectors for word-level semantic text similarity in Turkish [26] to evaluating Neural Machine Translation (NMT) for languages such as English and Hindi [27]. Additionally, an exploration of the accuracy of three prominent word embedding models within the context of Convolutional Neural Network (CNN) text classification [28] has been undertaken.…”
Section: Word Embedding (mentioning)
confidence: 99%