2022
DOI: 10.17485/ijst/v15i1.1935

Text to Speech Synthesizer for Tigrigna Linguistic using Concatenative Based approach with LSTM model

Abstract: Objectives: The purpose of this study is to describe a text-to-speech system for the Tigrigna language, using a dialog fusion architecture, and to develop a prototype text-to-speech synthesizer for Tigrigna. Methods: Direct observation and a review of articles are applied in this paper to identify the full set of strings that represent the language. The tools used in this work are MATLAB, LPC, and Python. An LSTM deep learning model was applied to determine accuracy, precision, recall, a…
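As a rough illustration of the evaluation the abstract describes (not the authors' actual code), the sketch below trains a small LSTM classifier in Python and reports accuracy, precision, and recall. The framework (Keras), the synthetic data, and all layer sizes are assumptions made purely for demonstration.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# train a small LSTM classifier and report accuracy, precision, and recall.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Synthetic stand-in for feature sequences:
# 200 samples, 30 timesteps, 12 features each, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30, 12)).astype("float32")
y = (X.mean(axis=(1, 2)) > 0).astype("int32")

model = Sequential([
    LSTM(32, input_shape=(30, 12)),   # hidden size is an arbitrary choice
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X[:160], y[:160], epochs=5, batch_size=16, verbose=0)

pred = (model.predict(X[160:], verbose=0).ravel() > 0.5).astype("int32")
print("accuracy :", accuracy_score(y[160:], pred))
print("precision:", precision_score(y[160:], pred))
print("recall   :", recall_score(y[160:], pred))
```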

Cited by 3 publications (3 citation statements)
References 12 publications (20 reference statements)
“…Each feature identified in the voice samples has been analyzed and incorporated into this database. A total of three female speakers provided the voice samples (20,21). Recording female speakers' voices is required since "Mary" Text to Speech Synthesis needs a female voice as its output.…”
Section: Results (mentioning)
confidence: 99%
“…In the internal states of the network, these activations are retained and may give long-term temporal context data. As the input sequence history progresses, RNNs can make use of a dynamically changing contextual window (13), (14).…”
Section: Long Short Term Memory (LSTM) (mentioning)
confidence: 99%
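To make the quoted point concrete, here is a minimal NumPy sketch of a single LSTM cell step (an illustration, not code from either paper): the cell state c and hidden state h are carried from one timestep to the next, which is what lets the network retain long-term temporal context as the input sequence grows.

```python
# Illustrative single-cell LSTM step in NumPy: the cell state c and hidden
# state h are passed from one timestep to the next, retaining context.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM timestep. W, U, b hold the stacked gate parameters
    (input, forget, output, candidate), each of hidden size H."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b          # pre-activations, shape (4H,)
    i = sigmoid(z[0:H])                 # input gate
    f = sigmoid(z[H:2*H])               # forget gate
    o = sigmoid(z[2*H:3*H])             # output gate
    g = np.tanh(z[3*H:4*H])             # candidate cell update
    c = f * c_prev + i * g              # new cell state (long-term memory)
    h = o * np.tanh(c)                  # new hidden state (context output)
    return h, c

# Random parameters and a short input sequence, just to show state carry-over.
rng = np.random.default_rng(0)
D, H, T = 5, 8, 6                       # input dim, hidden dim, timesteps
W, U, b = rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    x_t = rng.normal(size=D)
    h, c = lstm_step(x_t, h, c, W, U, b)   # state at step t feeds step t+1
print("final hidden state:", np.round(h, 3))
```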
“…The hand-shaped features were attained with the help of a single-layer Convolutional Self-Organizing Map (CSOM) instead of a pre-trained Deep Convolutional Neural Network (CNN). In literature [15], a prototype was developed for a text-to-speech synthesizer for Tigrigna Language.…”
Section: Introduction (mentioning)
confidence: 99%