2016
DOI: 10.21609/jiki.v9i1.366
Particle Swarm Optimization (PSO) for Training Optimization on Convolutional Neural Network (CNN)

Abstract: Neural networks have attracted many researchers lately. A substantial number of renowned universities have developed neural networks for a wide range of academic and industrial applications. Neural networks show considerable performance across various tasks. Nevertheless, for complex applications, a neural network's accuracy deteriorates significantly. To tackle this drawback, much research has been undertaken on improving the standard neural network. One of the most promising modificatio…

Cited by 43 publications (14 citation statements)
References 13 publications (15 reference statements)
“…Weights are shared within the convolution layer, while the pooling layer performs a subsampling function on the convolution layer's output, reducing the data rate from the layer below it. The output of the pooling layer feeds one or more fully connected layers [9].…”
Section: Convolutional Neural Network (CNN) (mentioning)
confidence: 99%
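The conv → pool → fully-connected data flow described in this citation statement can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (shapes, kernel size, and pooling window chosen arbitrarily), not the cited paper's implementation:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: the same kernel (shared weights) is
    applied at every spatial position of the input."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: subsamples the feature map,
    reducing the data rate passed to the next layer."""
    h, w = fmap.shape
    return fmap[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))       # toy single-channel input
kernel = rng.standard_normal((3, 3))      # shared convolution weights
pooled = max_pool(conv2d(image, kernel))  # 6x6 feature map -> 3x3
fc_out = pooled.ravel() @ rng.standard_normal(9)  # fully connected layer
```

Note how the 8×8 input shrinks to a 3×3 map before the fully connected layer consumes it, which is the data-rate reduction the statement refers to.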
“…Each particle changes its position, updating it toward the best position found so far. This evolving process helps the swarm find the optimum [76].…”
Section: B. Grey Wolf Optimization (mentioning)
confidence: 99%
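The position-update process this statement describes is the canonical PSO rule: each particle's velocity is pulled toward its personal best and the swarm's global best. Below is a minimal sketch on a toy sphere objective; the coefficients and bounds are conventional textbook choices, not values from the cited paper:

```python
import numpy as np

def pso_minimize(f, dim=2, n_particles=20, iters=100, seed=0):
    """Canonical PSO: particles evolve toward personal and global bests."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    x = rng.uniform(-5, 5, (n_particles, dim))   # initial positions
    v = np.zeros_like(x)                          # initial velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()      # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity update: inertia + pull toward pbest + pull toward gbest
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# minimize the sphere function f(p) = sum(p^2); optimum is the origin
best_x, best_val = pso_minimize(lambda p: np.sum(p ** 2))
```

With these settings the swarm converges close to the origin, illustrating how the evolving update drives particles toward the optimum.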
“…The hyper-parameters in the Deep-Res-Bidir-LSTM network affect the final result. Commonly used parameter-tuning methods include experimental methods, grid search [26], genetic algorithms (GA) [27], and particle swarm optimization (PSO) [28] [29]. Because experimental methods approximate good values by running many experiments, they are time-consuming.…”
Section: Hyper-parameters Setting (mentioning)
confidence: 99%
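Grid search, the simplest of the tuning methods listed above, makes the time cost concrete: every combination in the grid requires a full training run. A tiny sketch with a hypothetical validation-error function (`val_error` and its optimum are made up for illustration):

```python
from itertools import product

def val_error(lr, hidden):
    """Hypothetical validation error; in practice each call would mean
    training the network once, which is what makes exhaustive search slow."""
    return (lr - 0.01) ** 2 + (hidden - 64) ** 2 / 1e4

# 3 learning rates x 3 hidden sizes = 9 full training runs
grid = product([0.001, 0.01, 0.1], [32, 64, 128])
best = min(grid, key=lambda cfg: val_error(*cfg))
```

PSO and GA aim to reach a comparable configuration while evaluating far fewer points than the full grid.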