2021
DOI: 10.1007/s10255-021-0991-2
Convergence of Stochastic Gradient Descent in Deep Neural Network


Cited by 17 publications (3 citation statements); references 10 publications.
“…where f : ℝ^m → ℝ is the loss function, f_i, for i ∈ {1, …, N}, denotes the contribution to the loss function from data point i, and N denotes the total number of data points [8]. The optimization algorithm implemented for the purpose of ANN training was stochastic gradient descent [42].…”
Section: Scenario Faults Sc1 (mentioning)
confidence: 99%
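
The quoted formulation treats the training objective as a finite sum f(w) = (1/N) Σ_i f_i(w) of per-data-point contributions. A minimal sketch of plain SGD on such an objective, assuming illustrative least-squares losses f_i and NumPy rather than the citing paper's model or data:

    import numpy as np

    # Illustrative finite-sum objective: f(w) = (1/N) * sum_i f_i(w), with
    # hypothetical per-sample losses f_i(w) = 0.5 * (x_i . w - y_i)^2.
    rng = np.random.default_rng(0)
    N, m = 200, 5                          # number of data points, parameter dimension
    X = rng.normal(size=(N, m))
    w_true = rng.normal(size=m)
    y = X @ w_true + 0.01 * rng.normal(size=N)

    def grad_fi(w, i):
        """Gradient of the single-sample contribution f_i at w."""
        return (X[i] @ w - y[i]) * X[i]

    w = np.zeros(m)
    lr = 0.05
    for step in range(5000):
        i = rng.integers(N)                # sample one data point uniformly at random
        w -= lr * grad_fi(w, i)            # SGD update: w <- w - lr * grad f_i(w)
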
“…In Algorithm 1, we describe the pseudocode of our proposed DRL algorithm. We use the Stochastic Gradient Descent (SGD) algorithm [51] to perform Deep-Q-Network (DQN) agent training. Then, we tune the main hyperparameters to decide on optimal DNN configurations such as epoch/iteration numbers, optimizer parameters, and action selection strategies.…”
Section: B. DVEAP: The Proposed DQN-DRL (mentioning)
confidence: 99%
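
As a rough illustration of the quoted training setup, the sketch below updates a small Deep Q-Network with the SGD optimizer and an epsilon-greedy action-selection strategy. PyTorch is assumed, and the network size, state/action dimensions, replay buffer, and hyperparameter values are placeholders rather than the cited DVEAP configuration:

    import random
    from collections import deque

    import torch
    import torch.nn as nn

    STATE_DIM, N_ACTIONS = 8, 4            # placeholder dimensions
    GAMMA, EPSILON, LR = 0.99, 0.1, 1e-3   # placeholder hyperparameters

    q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
    optimizer = torch.optim.SGD(q_net.parameters(), lr=LR)   # SGD as the DQN optimizer
    replay = deque(maxlen=10_000)          # stores (s, a, r, s2, done) transitions

    def select_action(state):
        """Epsilon-greedy action selection over the Q-network's outputs."""
        if random.random() < EPSILON:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(q_net(state).argmax())

    def train_step(batch_size=32):
        """One SGD update of the Q-network on a replay-buffer minibatch."""
        if len(replay) < batch_size:
            return
        s, a, r, s2, done = zip(*random.sample(replay, batch_size))
        s, s2 = torch.stack(s), torch.stack(s2)
        a = torch.tensor(a).unsqueeze(1)
        r = torch.tensor(r)
        done = torch.tensor(done, dtype=torch.float32)
        q = q_net(s).gather(1, a).squeeze(1)               # Q(s, a) for taken actions
        with torch.no_grad():                              # bootstrapped TD target
            target = r + GAMMA * q_net(s2).max(1).values * (1.0 - done)
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                                   # stochastic gradient step
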
“…An illustration of gradient descent can be seen in Figure 4. Stochastic gradient descent (SGD) will be used to train the CNN; the advantages of SGD over full-batch gradient descent are that it converges faster and is able to escape local minima [24].…”
Section: B. CNN Architecture (unclassified)
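
To make the quoted comparison concrete: full-batch gradient descent recomputes the gradient over the entire dataset at every step, whereas SGD updates after each sample or minibatch, which is cheaper per step and adds gradient noise that can help escape shallow local minima. A minimal sketch of minibatch SGD training for a small CNN, assuming PyTorch and random stand-in data rather than the citing paper's dataset:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in data: 256 one-channel 28x28 images with 10 class labels.
    images = torch.randn(256, 1, 28, 28)
    labels = torch.randint(0, 10, (256,))
    loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

    cnn = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(8 * 14 * 14, 10),
    )
    optimizer = torch.optim.SGD(cnn.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):
        for x, y in loader:                # each minibatch yields a stochastic gradient
            optimizer.zero_grad()
            loss_fn(cnn(x), y).backward()
            optimizer.step()               # parameter update from this minibatch alone
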