2022
DOI: 10.1007/s11042-022-14273-1
A review of machine transliteration, translation, evaluation metrics and datasets in Indian Languages

Cited by 5 publications (6 citation statements)
References 61 publications
“…Both of these works use a cascade approach. As highlighted earlier, building a pipeline for English to Punjabi speech translation using the cascade approach is limited by the unavailability of state-of-the-art ASR [30], [9], MT [31], and S2T models for Punjabi. Specifically, there is no S2T model available for Punjabi [29].…”
Section: Direct Speech-to-Speech Translation (S2ST) Models (mentioning; confidence: 99%)
“…Input: NER approaches face two main input-related problems, namely code mixing and transliteration. Creating an NER dataset for Indian languages that covers entities such as names of persons, places, organizations, and dates is a time-consuming, labor-intensive process [114]. The challenges of the input process are discussed below:…”
Section: Issues and Challenges (mentioning; confidence: 99%)
“…One of the key drivers behind this progress is the development and adoption of neural machine translation (NMT) models, which have significantly improved the quality and fluency of translations by leveraging deep learning techniques [11]. These models are adept at capturing complex grammatical structures, idiomatic expressions, and linguistic nuances, thereby producing translations that closely mimic natural language usage [12]. Additionally, transformer-based architectures, such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), have further advanced cross-language translation capabilities.…”
Section: Introduction (mentioning; confidence: 99%)