2014 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2014.6889933
Training high-dimensional neural networks with cooperative particle swarm optimiser

Abstract: This paper analyses the behaviour of particle swarm optimisation applied to training high-dimensional neural networks. Despite being an established neural network training algorithm, particle swarm optimisation falls short at training high-dimensional neural networks. Reasons for poor performance of PSO are investigated in this paper, and hidden unit saturation is hypothesised to be a cause of the failure of PSO in training high-dimensional neural networks. An analysis of various activation functions and search…
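The saturation hypothesis in the abstract can be illustrated with a small experiment: train a tiny feedforward network with a basic gbest PSO and measure what fraction of tanh hidden activations are driven near their asymptotes. This is a minimal sketch, not the paper's implementation; the 2-4-1 architecture, XOR data, PSO constants, and the 0.9 saturation threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny XOR dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def unpack(w):
    # Flat weight vector -> 2-4-1 network (layout chosen for illustration)
    W1 = w[:8].reshape(2, 4); b1 = w[8:12]
    W2 = w[12:16].reshape(4, 1); b2 = w[16:17]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                       # hidden activations
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output
    return h, out

def mse(w):
    _, out = forward(w, X)
    return np.mean((out - y) ** 2)

def saturation(w, threshold=0.9):
    # Fraction of hidden activations with |tanh| beyond the threshold,
    # i.e. operating in the flat tail where gradients (and search
    # sensitivity) vanish
    h, _ = forward(w, X)
    return np.mean(np.abs(h) > threshold)

# Basic gbest PSO with inertia plus cognitive/social terms
dim, n_particles, iters = 17, 30, 200
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

w_in, c1, c2 = 0.729, 1.494, 1.494  # commonly used constricted PSO constants
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved] = pos[improved]
    pbest_f[improved] = f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"final MSE: {mse(gbest):.4f}, hidden saturation: {saturation(gbest):.2f}")
```

Tracking the saturation metric alongside the training error is one way to observe the failure mode the paper hypothesises: as particle positions (weight magnitudes) grow unbounded, hidden units are pushed into the flat regions of the activation function and the search stagnates, an effect that worsens as the weight dimensionality increases.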

Cited by 9 publications
(2 citation statements)
References 18 publications
“…Although a number of studies have proposed the use of population-based algorithms for training neural networks [9]–[14], they are seldom used in practice. One of the challenges is that the search space of neural network weights is unbounded and algorithms such as particle swarm optimisation may fail due to high weight magnitudes leading to hidden unit saturation [17], [18]. The one domain where population-based metaheuristics have shown competitive results compared to gradient-based methods is in deep reinforcement learning tasks [19].…”
Section: Introduction
confidence: 99%
“…Therefore, PSO is not susceptible to the gradient problem [31]. It seems that the PSO method is a potentially good deep learning approach, but research has indicated that the pure PSO training method struggles to train relatively large neural networks [32]. Reference [33] presented an integration framework of the PSO and GD for the LSTM when identifying handwriting.…”
Section: Introduction
confidence: 99%