2022 IEEE 7th International Conference for Convergence in Technology (I2CT)
DOI: 10.1109/i2ct54291.2022.9824167
Music Generation using Time Distributed Dense Stateful Char-RNNs

Cited by 5 publications (2 citation statements)
References 10 publications
“…However, in multi-time-series problems the features often contain several highly correlated variables. Random initialization of a large number of neurons is a challenge for the LSTM method: the learning algorithm may converge to different local minima depending on the initial parameter values [9]. Therefore, stacked autoencoders (SAEs) have recently been widely applied in many fields to overcome the random-initial-weight obstacle of LSTM algorithms.…”
Section: Introduction
confidence: 99%
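The sensitivity to initialization described above can be illustrated with a minimal sketch (an assumption for illustration, not code from the cited paper): plain gradient descent on a simple non-convex loss f(w) = (w² − 1)², which has two local minima at w = ±1. Starting points on opposite sides of the saddle at w = 0 converge to different minima.

```python
import numpy as np

def gradient_descent(w0, lr=0.05, steps=200):
    """Minimize f(w) = (w^2 - 1)^2 from initial value w0."""
    w = w0
    for _ in range(steps):
        grad = 4.0 * w * (w * w - 1.0)  # d/dw of (w^2 - 1)^2
        w -= lr * grad
    return w

# Two different initializations land in two different local minima.
w_pos = gradient_descent(0.5)   # starts right of the saddle at w = 0
w_neg = gradient_descent(-0.5)  # starts left of it
print(round(w_pos, 4), round(w_neg, 4))  # → 1.0 -1.0
```

The same effect, scaled up to the many weights of an LSTM, is what SAE-based pretraining aims to mitigate: the autoencoder's learned weights replace random initial values, biasing optimization toward a consistent basin.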
“…• A time-distributed dense layer applies a fully connected dense layer to each decoder output [9]. During the training phase, the adaptive moment estimation (Adam) algorithm, one of the most effective stochastic optimizers for deep learning models, is chosen as the optimizer, with a constant learning rate of lr = 0.001 [42].…”
confidence: 99%
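The time-distributed dense operation quoted above can be sketched in pure NumPy (a minimal sketch under assumed shapes, not the citing paper's implementation): one shared weight matrix W and bias b are applied at every timestep of a sequence of decoder outputs.

```python
import numpy as np

# Assumed toy dimensions: 5 timesteps, 8 decoder features, 3 output units.
rng = np.random.default_rng(0)
timesteps, features, units = 5, 8, 3
W = rng.normal(size=(features, units))   # shared across all timesteps
b = rng.normal(size=units)
h = rng.normal(size=(timesteps, features))  # decoder outputs, one row per step

# Applying the dense layer step by step...
stepwise = np.stack([h[t] @ W + b for t in range(timesteps)])

# ...is equivalent to one batched matrix product with the shared weights,
# which is how frameworks implement a time-distributed dense layer.
batched = h @ W + b
print(np.allclose(stepwise, batched))  # → True
```

Because the weights are shared across timesteps, the parameter count is independent of sequence length, which is the point of wrapping a dense layer in a time-distributed fashion.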