2022
DOI: 10.1016/j.jcmds.2022.100054
Deep learning, stochastic gradient descent and diffusion maps

Cited by 10 publications (6 citation statements)
References 20 publications
“…In Table (2), we examine the Pearson correlation coefficients between the groups. It can be seen that there are highly significant correlations (marked with **), significant correlations (marked with *) and weak correlations between the variables [29,30]. The estimated coefficients for the model are given in Table (3); some variables were highly significant, with a p value below 0.02, and some were significant, with a p value below 0.05.…”
Section: Real Dataset
confidence: 99%
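As a rough illustration of the marking convention quoted above, the following Python sketch computes a Pearson correlation with scipy.stats.pearsonr and flags it with ** or * using the p-value thresholds mentioned in the snippet (p < 0.02 and p < 0.05). The variables x and y are synthetic stand-ins, not columns of the cited real dataset.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Illustrative variables; in the cited study these would be columns of the real dataset.
x = rng.normal(size=100)
y = 0.6 * x + rng.normal(size=100)

r, p = pearsonr(x, y)
if p < 0.02:
    mark = "**"   # highly significant (p < 0.02, as in the quoted text)
elif p < 0.05:
    mark = "*"    # significant (p < 0.05)
else:
    mark = ""     # weak / not significant
print(f"r = {r:.3f}{mark}, p = {p:.4g}")
```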
“…Adam differs from classical stochastic gradient descent. Stochastic gradient descent (Fjellström & Nyström, 2022; Li et al., 2022) maintains a single learning rate (alpha) for all weight updates, and this learning rate does not change during training. Adam, in contrast, maintains a learning rate for each network weight (parameter) and adapts it separately as learning progresses.…”
Section: Literature Review
confidence: 99%
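A minimal NumPy sketch of the distinction drawn in the snippet above: plain SGD applies one global learning rate alpha to every weight, while Adam keeps running moment estimates and scales the step for each parameter individually. The update rules follow the standard Adam formulation; the function names, toy objective and default hyperparameters are illustrative choices, not taken from the cited papers.

```python
import numpy as np

def sgd_update(w, grad, alpha=0.01):
    """Classical SGD step: a single global learning rate alpha for every weight."""
    return w - alpha * grad

def adam_update(w, grad, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam step: per-parameter step sizes derived from running gradient moments."""
    m = beta1 * m + (1 - beta1) * grad        # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    return w - alpha * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy demo: minimize f(w) = ||w||^2, whose gradient is 2w.
w_sgd = w_adam = np.array([1.0, -2.0, 3.0])
m = v = np.zeros(3)
for t in range(1, 201):
    w_sgd = sgd_update(w_sgd, 2 * w_sgd)
    w_adam, m, v = adam_update(w_adam, 2 * w_adam, m, v, t)
print(w_sgd, w_adam)
```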
“…In the learning process of ANNs the aim is to find a weights vector w ∈ ℝ^m, which in turn can be considered as an optimization problem as defined in [8]. The optimal weights are

w* = arg min_{w ∈ ℝ^m} { f(w) = (1/N) ∑_{i=1}^{N} f_i(w) },…”
Section: Scenario Faults Sc1
confidence: 99%
“…where f : ℝ^m → ℝ is the loss function, f_i, for i ∈ {1, …, N}, denotes the contribution to the loss function from data point i, and N denotes the total number of data points [8]. The optimization algorithm implemented for the purpose of ANN training was stochastic gradient descent [42].…”
Section: Scenario Faults Sc1
confidence: 99%
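To make the quoted formulation concrete, here is a small self-contained Python sketch of that optimization problem: an empirical loss f(w) = (1/N) ∑ f_i(w) built from synthetic least-squares contributions f_i, minimized by stochastic gradient descent that samples one data point per step. The problem setup (data, step size, iteration count) is an assumption made for illustration, not the configuration used in the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: f_i(w) = (x_i . w - y_i)^2, f(w) = (1/N) * sum_i f_i(w).
N, m = 1000, 5
X = rng.normal(size=(N, m))
w_true = rng.normal(size=m)
y = X @ w_true + 0.1 * rng.normal(size=N)

def grad_fi(w, i):
    """Gradient of the contribution f_i from data point i."""
    return 2.0 * (X[i] @ w - y[i]) * X[i]

w = np.zeros(m)
alpha = 0.01
for step in range(20000):
    i = rng.integers(N)            # sample one data point uniformly at random
    w -= alpha * grad_fi(w, i)     # stochastic gradient step

# Distance to the generating weights; small if SGD has (approximately) converged.
print(np.linalg.norm(w - w_true))
```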