2019
DOI: 10.1007/s12065-019-00269-8
Numerical optimization and feed-forward neural networks training using an improved optimization algorithm: multiple leader salp swarm algorithm

Cited by 15 publications (16 citation statements)
References 65 publications
“…From Tables 7, 8, 9, and 10 it can be deduced that the proposed optimized DA algorithm is more effective than multiple other swarm intelligence algorithms at training artificial neural networks. For the iris, glass, and breast cancer datasets, the ANN trained by the proposed optimized DA achieves a higher accuracy than all other swarm intelligence algorithms used to train ANNs in [47], [48], and [49]. For the balloon dataset, the ANN trained by the optimized DA reaches 100% accuracy, the same as the ANNs trained by several other algorithms.…”
Section: Results
confidence: 79%
“…6, it can be seen that the optimized DA algorithm converges to the optimal solution at around iteration 15, while the original DA converges at around iteration 18. Moreover, it can be seen that the optimized DA […] In order to have a fair comparison, the optimized DA algorithm is used for training ANNs with the same architecture as the other works in [47], [48], and [49] for the four datasets. For the iris dataset, the following architecture is used: one input layer with four neurons, one hidden layer with nine neurons, and one output layer with three neurons. For the balloon dataset: one input layer with four neurons, one hidden layer with nine neurons, and one output layer with one neuron. For the glass dataset: one input layer with nine neurons, one hidden layer with 19 neurons, and one output layer with one neuron. For the breast cancer dataset: one input layer with nine neurons, one hidden layer with 19 neurons, and one output layer with one neuron.…”
Section: Results
confidence: 99%
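The quoted single-hidden-layer architectures map directly onto a flat weight vector, which is the representation population-based trainers search over. Below is a minimal sketch of that encoding and its forward pass; the dictionary, function names, and sigmoid activation are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

# Single-hidden-layer architectures quoted above, as (inputs, hidden, outputs).
# Dict keys and all helper names here are illustrative, not from the paper.
ARCHITECTURES = {
    "iris":          (4, 9, 3),
    "balloon":       (4, 9, 1),
    "glass":         (9, 19, 1),
    "breast_cancer": (9, 19, 1),
}

def n_weights(n_in, n_hid, n_out):
    # Weights plus biases for both layers, flattened into one vector:
    # this flat encoding is what a swarm optimizer searches over.
    return n_hid * (n_in + 1) + n_out * (n_hid + 1)

def forward(x, w, n_in, n_hid, n_out):
    """Sigmoid MLP forward pass, unpacking the flat weight vector in order."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    i = n_hid * n_in
    W1, b1 = w[:i].reshape(n_hid, n_in), w[i:i + n_hid]
    j = i + n_hid
    k = j + n_out * n_hid
    W2, b2 = w[j:k].reshape(n_out, n_hid), w[k:k + n_out]
    return sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)
```

For example, the iris architecture (4-9-3) flattens to n_weights(4, 9, 3) = 75 trainable values, so each candidate solution in the optimizer is a vector of that length.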
“…The results showed the superiority of the proposed approach compared to the results of other algorithms [53]. Researchers have also focused on solving the problem of training ANNs using the multiple leader salp swarm algorithm [54], hybrid particle swarm optimization-genetic algorithm (PSO-GA) [55], the Dragonfly Algorithm [56], the social-learning particle swarm optimization algorithm (SL-PSO) [57], the tree-seed algorithm (TSA) [58], a novel hybrid Sine Cosine Algorithm [59], the Artificial Algae Algorithm [60], and PSO [61]. Studies in the literature indicate that different optimization algorithms have been used in the training of ANNs.…”
Section: MRFO Algorithm Applied to Feedforward Neural Network
confidence: 99%
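Several of the quoted works, including the indexed paper, train such networks by handing the flat weight vector to a salp-swarm-style optimizer. The sketch below implements the standard single-leader salp swarm update; the paper's multiple-leader variant modifies the leader phase and is not reproduced here. The loss callable (for instance, training-set mean squared error computed with a forward pass like the one above) and all parameter defaults are assumptions for illustration.

```python
import numpy as np

def ssa_minimize(loss, dim, n_salps=30, iters=100, lb=-1.0, ub=1.0, seed=0):
    """Minimize `loss` over a flat weight vector with the standard
    (single-leader) salp swarm algorithm; a simplified stand-in for the
    multiple-leader variant proposed in the indexed paper."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_salps, dim))    # salp chain positions
    fitness = np.array([loss(x) for x in X])
    food = X[fitness.argmin()].copy()               # best solution found so far
    best = fitness.min()
    for t in range(1, iters + 1):
        c1 = 2.0 * np.exp(-(4.0 * t / iters) ** 2)  # decays exploration over time
        for i in range(n_salps):
            if i == 0:                              # leader orbits the food source
                c2 = rng.random(dim)
                c3 = rng.random(dim)
                step = c1 * ((ub - lb) * c2 + lb)
                X[i] = np.where(c3 >= 0.5, food + step, food - step)
            else:                                   # followers trail the salp ahead
                X[i] = 0.5 * (X[i] + X[i - 1])
            X[i] = np.clip(X[i], lb, ub)
            f = loss(X[i])
            if f < best:
                best, food = f, X[i].copy()
    return food, best
```

Each salp position is one complete candidate weight vector, so the best position found is a trained network; the c1 schedule shifts the leader from exploration toward exploitation as iterations progress, which is what the convergence comparison quoted above measures.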