A survey on metaheuristic optimization for random single-hidden layer feedforward neural network
2019
DOI: 10.1016/j.neucom.2018.07.080

Cited by 67 publications (18 citation statements)
References 117 publications
“…We used the adaptive moment estimation optimization algorithm (also known as the Adam optimizer) to optimize the momentum and learning rate [61]. Generally, the Adam optimizer is more broadly applied in neural networks [62][63][64]. The Adam optimizer can be used instead of the classical stochastic gradient descent procedure to iteratively update network weights based on training data.…”
Section: Modeling
confidence: 99%
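The excerpt above describes Adam as a drop-in replacement for classical stochastic gradient descent when iteratively updating network weights. A minimal sketch of the Adam update rule follows, using the commonly cited default hyperparameters; the gradient function and step count are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

def adam_update(w, grad_fn, n_steps=1000, lr=1e-3,
                beta1=0.9, beta2=0.999, eps=1e-8):
    """Adaptive moment estimation (Adam): keep exponentially decaying
    averages of the gradient (first moment) and its square (second
    moment), then take a bias-corrected, per-parameter scaled step."""
    m = np.zeros_like(w)   # first-moment estimate
    v = np.zeros_like(w)   # second-moment estimate
    for t in range(1, n_steps + 1):
        g = grad_fn(w)                      # gradient of the loss at w (placeholder)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)          # bias correction for the warm-up phase
        v_hat = v / (1 - beta2**t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w
```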
“…The reference [14] presented a survey on randomized methods for training ANNs. The reference [15] investigated random single-hidden layer feedforward neural network based on metaheuristic and non-iterative learning approach. Application of recurrent ANNs in the field of statistical language modeling can be found in [16].…”
Section: An Overview About ANN
confidence: 99%
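The surveyed paper concerns random single-hidden-layer feedforward networks trained with metaheuristic or non-iterative learning. Below is a minimal sketch of the non-iterative case, assuming an ELM-style setup in which the input-to-hidden weights stay random and only the output weights are solved in closed form; the names, activation, and dimensions are illustrative.

```python
import numpy as np

def train_random_slfn(X, y, n_hidden=50, seed=0):
    """Illustrative random single-hidden-layer feedforward network:
    hidden weights are drawn at random and kept fixed; only the output
    weights are computed, non-iteratively, by least squares."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input-to-hidden weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                     # closed-form output weights (pseudoinverse)
    return W, b, beta

def predict_random_slfn(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Metaheuristic variants covered by the survey replace the purely random choice of W and b with a population-based search, while the output weights can still be obtained by the same least-squares step.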
“…Nevertheless, the employed neural computing approaches have mainly relied on the conventional gradient descent for model training [7,8,37,46]. Although this conventional training method can help to attain acceptable results in many application cases, it also suffers from slow convergence rate and trapping in local optimal [47].…”
Section: Introduction
confidence: 99%
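For contrast with the metaheuristic alternatives this excerpt motivates, here is a minimal sketch of the conventional gradient descent update it criticizes; the toy non-convex loss, starting point, and step size are illustrative assumptions chosen so the iterate can settle in a local rather than global minimum.

```python
import numpy as np

def gradient_descent(grad, w0, lr=0.01, n_steps=1000):
    """Plain gradient descent: repeat w <- w - lr * grad(w).
    Convergence can be slow, and on a non-convex loss the iterate may
    stop at a local minimum, the limitation the excerpt attributes to
    conventional neural-network training."""
    w = np.asarray(w0, dtype=float)
    for _ in range(n_steps):
        w = w - lr * grad(w)
    return w

# Illustrative non-convex loss with two minima of different depth
loss = lambda w: np.sum(w**4 - 3 * w**2 + w)
grad = lambda w: 4 * w**3 - 6 * w + 1

w_final = gradient_descent(grad, w0=[1.5])  # settles near the local, not global, minimum
```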