2021
DOI: 10.1016/j.sigpro.2021.108014

Convergence behavior of diffusion stochastic gradient descent algorithm

Cited by 14 publications (5 citation statements)
References 19 publications
“…This algorithm has the advantages of simplicity, low computational cost, fast convergence, and reliable effect [63,64]. Several scholars have already exploited the SGD model to address large-scale problems and obtain accurate findings [65,66]. However, the BN, DTable, and RBFN models also performed better, as the AUC value was greater than 0.8 on both the training and validation datasets.…”
Section: Discussion
confidence: 99%
“…SGD is elucidated as an iterative optimization methodology, applicable to unconstrained optimization problems [20]. Its utility extends to identifying optimal parameter configurations for ML algorithms and optimizing objective functions with requisite smoothing properties [21]. Recognized for its simplicity and efficacy, SGD is particularly well-suited for fitting linear classifiers and regressors under convex loss functions [20].…”
Section: SGD
confidence: 99%
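The passage above describes plain SGD as an iterative method for fitting linear classifiers under a convex loss. As a minimal sketch of that idea (not the diffusion variant studied in the cited paper), the following Python snippet trains a logistic-regression classifier with single-sample gradient steps; the synthetic data, step size, and epoch count are illustrative assumptions.

import numpy as np

# Minimal SGD sketch: logistic regression (convex log-loss), one sample per update.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                       # synthetic features (assumption)
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)    # synthetic labels (assumption)

w = np.zeros(X.shape[1])
lr = 0.1                                                             # step size (assumption)
for epoch in range(20):
    for i in rng.permutation(len(X)):                                # visit samples in random order
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))                          # sigmoid prediction
        w -= lr * (p - y[i]) * X[i]                                  # stochastic gradient step on log-loss

accuracy = np.mean(((X @ w) > 0) == y.astype(bool))
print(f"training accuracy: {accuracy:.2f}")

Each update uses the gradient of the loss at a single sample, which is what gives SGD its low per-iteration cost on large-scale problems.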
“…For deterministic optimization algorithms, such as sequential minimal optimization (SMO) [8] and stochastic gradient descent (SGD) [9], if the objective function is discontinuous and nondifferentiable, their convergence speed is usually slow and they will easily fall into the local optimum. As a stochastic optimization method, the swarm intelligence optimization methods introduce a brand new path to solve global optimization problems by taking advantage of randomness.…”
Section: Introduction
confidence: 99%