2011
DOI: 10.1016/b978-0-12-385983-9.00001-6

The Synthesis of a Stochastic Artificial Neural Network Application Using a Genetic Algorithm Approach

Cited by 4 publications (4 citation statements)
References 65 publications
“…There are several techniques used to improve the training performance of artificial neural networks (ANNs), such as evolutionary algorithms (EA) and multi-objective hybrid evolutionary algorithms (Qasem, Shamsuddin, & Zain, 2012; Tallón-Ballesteros & Hervás-Martínez, 2011), genetic algorithms (GA) (Blanco, Delgado, & Pegalajar, 2001; García-Pedrajas, Ortiz-Boyer, & Hervás-Martínez, 2006; Geretti & Abramo, 2011), particle swarm optimization (PSO) (Al-Shareef & Abbod, 2010; Hong-Bo, Yi-Yuan, Jun, & Ye, 2004; Hongwen & Rui, 2006), differential evolution (DE) (Slowik & Bialko, 2008; Subudhi & Jena, 2011), ant colony optimization (ACO) (Ashena & Moghadasi, 2011; Blum & Socha, 2005), and BP and improved BP algorithms (Nawi, Ransing, & Ransing, 2006; Yan, Zhongjun, & Jiayu, 2010). These techniques are used for the initialization of optimum weights, parameters, and activation functions, and for the selection of a proper network structure.…”
Section: Introduction
confidence: 99%
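The GA-based training the snippet above refers to replaces gradient descent with an evolutionary search over the network's weight vector. Below is a minimal sketch of the idea; the 2-2-1 network shape, the XOR data, and all population and mutation settings are illustrative assumptions, not details taken from the cited papers.

```python
# Minimal sketch: evolving the weights of a tiny feed-forward network
# with a genetic algorithm instead of back-propagation. Hypothetical
# setup: a 2-2-1 network trained on XOR.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

N_W = 2 * 2 + 2 + 2 * 1 + 1  # weights + biases of a 2-2-1 network

def forward(w, x):
    """Evaluate the 2-2-1 network encoded by the flat vector w."""
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8].reshape(2, 1); b2 = w[8]
    h = np.tanh(x @ W1 + b1)                  # hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def fitness(w):
    """Negative mean squared error: larger is better."""
    y = np.array([forward(w, x)[0] for x in X])
    return -np.mean((y - t) ** 2)

pop = rng.normal(0, 1, size=(50, N_W))        # random initial population
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(N_W) < 0.5               # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(0, 0.1, N_W) * (rng.random(N_W) < 0.2)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])           # elitism: keep the parents

best = pop[np.argmax([fitness(ind) for ind in pop])]
print([round(forward(best, x)[0], 2) for x in X])  # should approach [0, 1, 1, 0]
```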
“…If the network topology is not carefully selected, the NN algorithm can get trapped in local minima, or it might lead to slow convergence or even network failure. In order to overcome the disadvantages of standard BP, many population-based global optimization techniques have been applied [20], including GA (Geretti & Abramo, 2011), improved BP (Nawi, et al., 2006), DE (Slowik & Bialko, 2008), BP-ant colony (Chengzhi, Yifan, Lichao, & Yang, 2008), and PSO (Hongwen & Rui, 2006).…”
Section: Introduction
confidence: 99%
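PSO, named in the snippet above as one of the population-based alternatives to standard BP, moves a swarm of candidate solutions toward personal and swarm-wide bests instead of following a gradient. A minimal sketch follows; in the cited works the particle position would encode the network's weight vector, while here a simple quadratic stand-in loss and assumed swarm parameters are used for illustration.

```python
# Minimal PSO sketch. The quadratic loss stands in for an ANN training
# error over the weight vector; swarm size, inertia, and acceleration
# constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def loss(w):
    # Stand-in for a network's training error as a function of weights w.
    return np.sum((w - 3.0) ** 2)

DIM, SWARM = 10, 30
pos = rng.uniform(-5, 5, (SWARM, DIM))    # particle positions (candidate weights)
vel = np.zeros((SWARM, DIM))
pbest = pos.copy()                         # each particle's best position
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy() # swarm-wide best position

w_inertia, c1, c2 = 0.7, 1.5, 1.5
for _ in range(100):
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    vel = (w_inertia * vel
           + c1 * r1 * (pbest - pos)       # pull toward personal best
           + c2 * r2 * (gbest - pos))      # pull toward global best
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(round(loss(gbest), 6))  # approaches 0 as the swarm converges on w = 3
```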
“…A feed-forward process iterates for a defined period to reach the best prediction with a reduced amount of computation power and processing time. The neuron function model can be defined as $y = \phi(A(x))$ [64].…”
Section: Artificial Neural Network
confidence: 99%
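The quoted neuron model $y = \phi(A(x))$ composes an aggregation function $A$ with an activation $\phi$. A minimal sketch, assuming a weighted-sum-plus-bias aggregation and a sigmoid activation (the snippet itself fixes neither choice):

```python
# Sketch of the neuron model y = phi(A(x)).
import numpy as np

def A(x, w, b):
    """Aggregation: weighted sum of the inputs plus a bias."""
    return np.dot(w, x) + b

def phi(a):
    """Activation: logistic sigmoid (an assumed choice)."""
    return 1 / (1 + np.exp(-a))

x = np.array([0.5, -1.0, 2.0])   # example inputs
w = np.array([0.3, 0.8, -0.5])   # example weights
b = 0.1                          # example bias

y = phi(A(x, w, b))              # y = phi(A(x))
print(y)
```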
“…The effect of the weight on the loss function in feed-forward propagation can be expressed using the partial derivative $\partial L / \partial w$, shown in equation (11) [64].…”
Section: Artificial Neural Network
confidence: 99%
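The partial derivative the last snippet refers to measures how the loss changes with a single weight. A minimal sketch for the neuron model above, assuming a squared-error loss $L = (y - t)^2$ (the snippet does not specify the loss), with a finite-difference check of the chain-rule gradient:

```python
# Sketch of dL/dw for a single sigmoid neuron with squared-error loss.
import numpy as np

def neuron(x, w, b):
    a = np.dot(w, x) + b
    y = 1 / (1 + np.exp(-a))      # sigmoid activation
    return a, y

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.3, 0.8, -0.5])
b, t = 0.1, 1.0                   # bias and target (assumed values)

a, y = neuron(x, w, b)
# Chain rule: dL/dw = dL/dy * dy/da * da/dw
#           = 2(y - t) * y(1 - y) * x
grad = 2 * (y - t) * y * (1 - y) * x

# Finite-difference check on the first weight.
eps = 1e-6
w_pert = w.copy(); w_pert[0] += eps
_, y_pert = neuron(x, w_pert, b)
fd = ((y_pert - t) ** 2 - (y - t) ** 2) / eps
print(grad[0], fd)                # the two values should agree closely
```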