2012
DOI: 10.1016/j.asoc.2011.09.001
Multi-objective optimization of a stacked neural network using an evolutionary hyper-heuristic

Cited by 46 publications (21 citation statements)
References 26 publications
“…NSGA-II learned to choose from a set of rules representing a constructive heuristic for 2D irregular stock cutting. In (Furtuna et al., 2012) a multi-objective hyper-heuristic for the design and optimization of a stacked neural network is proposed. The approach is based on NSGA-II combined with a local search algorithm (a quasi-Newton method).…”
Section: Introduction
confidence: 99%
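The NSGA-II plus quasi-Newton combination mentioned in this statement is essentially a memetic loop: evolutionary variation proposes candidates and a quasi-Newton local search polishes them. Below is a minimal sketch of that pattern; the toy bi-objective problem, the Gaussian mutation, and the weighted-sum scalarisation used for the local step are illustrative assumptions, not details of (Furtuna et al., 2012).

```python
# Sketch of a memetic multi-objective loop: evolutionary variation plus a
# quasi-Newton local search.  The problem, mutation, and scalarisation are
# assumptions for illustration only.
import numpy as np
from scipy.optimize import minimize

def objectives(x):
    """Toy bi-objective problem (assumed): minimise both components."""
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

def pareto_front(pop):
    """Keep the non-dominated individuals (the core selection idea of NSGA-II)."""
    fits = [objectives(x) for x in pop]
    keep = [i for i, fa in enumerate(fits)
            if not any(dominates(fb, fa) for j, fb in enumerate(fits) if j != i)]
    return [pop[i] for i in keep]

def quasi_newton_refine(x, w):
    """Quasi-Newton (BFGS) local search on a weighted-sum scalarisation."""
    return minimize(lambda z: w @ objectives(z), x, method="BFGS").x

rng = np.random.default_rng(0)
pop = list(rng.uniform(-2.0, 2.0, size=(20, 3)))
for gen in range(10):
    # variation: Gaussian mutation (a stand-in for NSGA-II's crossover/mutation)
    children = [x + rng.normal(0.0, 0.2, size=x.shape) for x in pop]
    # memetic step: refine each child toward a random weighting of the objectives
    children = [quasi_newton_refine(c, rng.dirichlet([1.0, 1.0])) for c in children]
    pop = pareto_front(pop + children)[:20]

print(f"{len(pop)} non-dominated solutions after 10 generations")
```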
“…Hierarchical genetic algorithms, which use parametric and control genes to construct the chromosome, were applied to neuroevolution by Elhachmi and Guennoun [126]. On the side of ANN training procedures, the focus in recent years has been on novel combinations of GAs with gradient-based or local optimization methods, which were used to address stock market time-series prediction [120] and to optimize multi-objective processes in material synthesis [128].…”
Section: Hybrid ANN+EAs
confidence: 99%
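To make the "parametric and control genes" idea concrete, here is a minimal sketch of a hierarchical chromosome in which control genes switch layer-size genes on or off; the layer count, bounds, and decoding rule are illustrative assumptions, not taken from [126].

```python
# Sketch of a hierarchical chromosome: control genes (bits) decide which
# parametric genes (hidden-layer sizes) are expressed.  All specifics assumed.
import random

MAX_LAYERS = 4

def random_chromosome():
    control = [random.randint(0, 1) for _ in range(MAX_LAYERS)]   # layer on/off
    params = [random.randint(2, 64) for _ in range(MAX_LAYERS)]   # neurons/layer
    return control, params

def decode(chromosome):
    """Only parametric genes whose control gene is 1 contribute to the network."""
    control, params = chromosome
    return [n for bit, n in zip(control, params) if bit == 1]

random.seed(1)
c = random_chromosome()
print("control genes:   ", c[0])
print("parametric genes:", c[1])
print("decoded layers:  ", decode(c))
```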
“…For MWO, a Lévy-flight shape parameter of 1.8, a short-range reference coefficient of 0.11, a long-range reference coefficient of 0.75, moving coefficients of 0.63, 1.26, and 1.05, a walking scale of 0.1, a Lévy distribution of 1.5, and a space scale factor of 0.5 are selected [63]. For lower-level training of the soft TDNN sensor, a sigmoid transfer function, a fully complex architecture, the Levenberg-Marquardt training method, 100 epochs, a learning rate of 0.1, and a weight and threshold range of [-1, 1] are selected [69]. It should also be mentioned that all of the hyper-level BIC architecture-evolving algorithms run the optimization for 1000 function evaluations.…”
Section: Parameter Selection and Performance Metrics Verification
confidence: 99%
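For readability, the settings quoted in this statement can be collected into a single configuration; the key names below are assumed for illustration, while the values are copied from the excerpt ([63], [69]).

```python
# Configuration implied by the excerpt above.  Key names are assumptions;
# values come verbatim from the quoted statement ([63], [69]).
mwo_params = {
    "levy_flight_shape": 1.8,
    "short_range_ref_coeff": 0.11,
    "long_range_ref_coeff": 0.75,
    "moving_coeffs": (0.63, 1.26, 1.05),
    "walking_scale": 0.1,
    "levy_distribution": 1.5,
    "space_scale_factor": 0.5,
}
tdnn_training = {
    "transfer_function": "sigmoid",
    "architecture": "fully complex",
    "trainer": "Levenberg-Marquardt",
    "epochs": 100,
    "learning_rate": 0.1,
    "weight_threshold_range": (-1.0, 1.0),
}
budget = {"hyper_level_function_evaluations": 1000}
```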