2022
DOI: 10.1016/j.compeleceng.2022.107942
An LSTM-based model for the compression of acoustic inventories for corpus-based text-to-speech synthesis systems

Cited by 5 publications (4 citation statements); references 22 publications.
“…In this paper, based on the time-series relationship of the gas reservoir production decline curve, deep learning algorithms suited to time-series analysis are selected. Among these, the LSTM neural network is preferred, as this type of network is highly flexible and has made significant progress in many fields through years of research by many scholars (Chi, 2022; Chung et al., 2022; Rojc and Mlakar, 2022).…”
Section: LSTM Neural Network
confidence: 99%
“…Recent advances in deep learning have been applied to a variety of fields [3,4], including audio classification and processing [5][6][7]. Similarly, acoustic scene classification, where earlier models relied mostly on sophisticated hand-crafted feature engineering [8,9], now employs deep learning in multiple applications without stringent computational requirements [10].…”
Section: Related Work
confidence: 99%
“…Each neuron in the visible layer is connected to every neuron in the hidden layer, there are no connections between neurons within the same layer, and every neuron has only two output states. The hidden layer and the visible layer thus express the same features in different spaces, which yields initial weight values consistent with the actual situation [12].…”
Section: Key Technologies of Deep Learning
confidence: 99%
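The bipartite structure described in the excerpt above (every visible unit connected to every hidden unit, no intra-layer connections, binary units) can be sketched in pure Python. This is an illustrative toy, not the cited paper's implementation; the weight matrix `W`, hidden bias `b_hid`, and the 3-visible/2-hidden dimensions are invented for demonstration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_probs(v, W, b_hid):
    # Each hidden unit j receives input from every visible unit i
    # (bipartite graph, no connections within a layer):
    #   p(h_j = 1 | v) = sigmoid(b_j + sum_i v_i * W[i][j])
    return [sigmoid(b_hid[j] + sum(v[i] * W[i][j] for i in range(len(v))))
            for j in range(len(b_hid))]

# Toy 3-visible x 2-hidden RBM; weights are illustrative values only.
W = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2]]
b_hid = [0.0, 0.1]
v = [1, 0, 1]  # binary visible states, as in the excerpt
print(hidden_probs(v, W, b_hid))
```

Because the graph is bipartite, all hidden probabilities can be computed in one pass given the visible states, which is what makes RBM inference and training tractable.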
“…(3) The reconstructed visible-layer state value v is used as the input of the RBM structure, and the hidden-layer probability h is calculated again according to step (1). (4) Update the weight parameters according to formula (12), where ⟨·⟩ is the average over all samples in each mini-batch and ε is the learning rate, as shown in…”
Section: Network Training Algorithm
confidence: 99%
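The training steps quoted above describe one iteration of contrastive divergence (CD-1): compute hidden probabilities from the data, sample the hidden states, reconstruct the visible layer, recompute hidden probabilities, then update the weights by ε(⟨v h⟩_data − ⟨v h⟩_recon). A minimal single-sample sketch, with biases omitted for brevity and all names and values invented for illustration:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cd1_update(v0, W, eps):
    """One CD-1 step on a single binary sample v0; mutates and returns W."""
    nv, nh = len(W), len(W[0])
    # (1) hidden probabilities from the data
    h0 = [sigmoid(sum(v0[i] * W[i][j] for i in range(nv))) for j in range(nh)]
    # (2) sample binary hidden states from those probabilities
    h_samp = [1 if random.random() < p else 0 for p in h0]
    # (3) reconstruct the visible layer, then recompute hidden probabilities
    v1 = [sigmoid(sum(h_samp[j] * W[i][j] for j in range(nh))) for i in range(nv)]
    h1 = [sigmoid(sum(v1[i] * W[i][j] for i in range(nv))) for j in range(nh)]
    # (4) weight update: eps * (<v h>_data - <v h>_reconstruction)
    for i in range(nv):
        for j in range(nh):
            W[i][j] += eps * (v0[i] * h0[j] - v1[i] * h1[j])
    return W

random.seed(0)  # reproducible hidden-state sampling
W = [[0.1, -0.1], [0.2, 0.0], [0.0, 0.3]]
W = cd1_update([1, 0, 1], W, eps=0.1)
```

In practice the ⟨·⟩ averages in formula (12) are taken over a mini-batch rather than a single sample, and separate bias updates are applied alongside the weight update.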