2006
DOI: 10.1007/11759966_105

Evolving Neural Networks Using the Hybrid of Ant Colony Optimization and BP Algorithms

Cited by 34 publications (24 citation statements)
References 10 publications
“…Some researchers have demonstrated that standalone evolutionary training is faster than gradient descent training (Montana and Davis, 1989), whereas others have found no significant difference between the two types of training, with the outcome depending largely on the problem (Socha and Blum, 2007). However, hybrid training with a GA (Alba and Chicano, 2004) or ACO (Liu et al, 2006; Socha and Blum, 2007; Mavrovouniotis and Yang, 2013) usually performs better than standalone metaheuristics or gradient descent algorithms. This is because metaheuristics are global optimization algorithms and are less sensitive to the initial conditions of training, whereas local optimization algorithms find the local optimum in the neighbourhood of the given initial weights.…”
Section: Training Artificial Neural Network
Citation type: mentioning
confidence: 99%
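To make the sensitivity argument above concrete, the short sketch below (an illustrative example, not taken from any of the cited papers) runs plain gradient descent on a one-dimensional multimodal function from two different starting points. Each run settles in a different minimum, which is exactly the dependence on initial conditions that a global metaheuristic such as ACO is meant to reduce.

```python
# Illustrative example (assumed, not from the cited papers): local, gradient-based
# optimization is sensitive to its starting point. On a multimodal objective,
# gradient descent converges to whichever minimum lies near the initial value.
import numpy as np

def f(x):
    # A 1-D multimodal "loss" with several local minima.
    return np.sin(3 * x) + 0.3 * x ** 2

def grad_f(x, h=1e-5):
    # Central-difference numerical gradient.
    return (f(x + h) - f(x - h)) / (2 * h)

def gradient_descent(x0, lr=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

# Two different initial values end up in two different minima.
for x0 in (-1.0, 2.0):
    x_final = gradient_descent(x0)
    print(f"start {x0:+.1f} -> x = {x_final:+.3f}, f(x) = {f(x_final):.3f}")
```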
“…For example, in many cases the initial weights selected for back-propagation may lead to a very poor local optimum. To address this issue, some hybrid training algorithms use the best values obtained from ACO training as the initial weights for back-propagation training (Liu et al, 2006; Mavrovouniotis and Yang, 2013). The idea is to select a promising neighbourhood by ACO training and then search for the optimum by gradient descent training.…”
Section: Training Artificial Neural Network
Citation type: mentioning
confidence: 99%
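The sketch below illustrates this two-phase idea in a self-contained form. It is not the authors' exact algorithm: the global phase is a simplified continuous ACO-style search (loosely modelled on rank-weighted Gaussian sampling around an archive of good solutions), which proposes initial weights for a tiny 2-2-1 network on XOR, and ordinary backpropagation then refines them. The network size, the task, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of hybrid training: a simplified continuous ACO-style search picks
# promising initial weights, then backpropagation (gradient descent) refines them.
# Network, task, and hyperparameters are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-2-1 MLP on XOR; all weights handled as one flat vector.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
sizes = [(2, 2), (1, 2), (1, 2), (1, 1)]          # shapes of W1, b1, W2, b2
dim = int(sum(np.prod(s) for s in sizes))

def unpack(w):
    parts, i = [], 0
    for s in sizes:
        n = int(np.prod(s))
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts                                   # W1, b1, W2, b2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1.T + b1)
    out = sigmoid(h @ W2.T + b2)
    return float(np.mean((out - y) ** 2))

# --- Global phase: simplified continuous ACO (rank-weighted Gaussian sampling) ---
def aco_search(n_ants=10, archive_size=10, iters=200, q=0.3, xi=0.85):
    archive = rng.normal(0.0, 1.0, size=(archive_size, dim))
    fitness = np.array([loss(w) for w in archive])
    ranks = np.arange(1, archive_size + 1)
    weights = np.exp(-(ranks - 1) ** 2 / (2 * (q * archive_size) ** 2))
    probs = weights / weights.sum()                # better-ranked solutions guide more ants
    for _ in range(iters):
        order = np.argsort(fitness)                # keep the archive sorted, best first
        archive, fitness = archive[order], fitness[order]
        ants = []
        for _ in range(n_ants):
            k = rng.choice(archive_size, p=probs)  # pick a guiding solution by rank
            sigma = xi * np.mean(np.abs(archive - archive[k]), axis=0)
            ants.append(rng.normal(archive[k], sigma + 1e-12))
        ants = np.array(ants)
        ant_fit = np.array([loss(w) for w in ants])
        pool = np.vstack([archive, ants])
        pool_fit = np.concatenate([fitness, ant_fit])
        best = np.argsort(pool_fit)[:archive_size] # elitist replacement
        archive, fitness = pool[best], pool_fit[best]
    return archive[np.argmin(fitness)]

# --- Local phase: backpropagation starting from the ACO-selected weights ---
def backprop_refine(w, lr=0.5, epochs=2000):
    W1, b1, W2, b2 = [p.copy() for p in unpack(w)]
    for _ in range(epochs):
        h = sigmoid(X @ W1.T + b1)
        out = sigmoid(h @ W2.T + b2)
        d_out = (out - y) * out * (1 - out)        # MSE gradient through output sigmoid
        d_h = (d_out @ W2) * h * (1 - h)           # backpropagated hidden-layer error
        W2 -= lr * d_out.T @ h / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * d_h.T @ X / len(X)
        b1 -= lr * d_h.mean(axis=0)
    return np.concatenate([p.ravel() for p in (W1, b1, W2, b2)])

w0 = aco_search()
print("loss after ACO phase:     ", loss(w0))
print("loss after BP refinement: ", loss(backprop_refine(w0)))
```

The split mirrors the idea quoted above: the archive-based sampling explores the weight space globally and is largely indifferent to where it starts, while the gradient-descent phase only has to polish a solution inside the promising neighbourhood it is handed.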