Evolutionary neural networks for nonlinear dynamics modeling (1998)
DOI: 10.1007/bfb0056901

Cited by 23 publications (15 citation statements: 0 supporting, 15 mentioning, 0 contrasting)
References 11 publications
“…Some authors use binary or real encoding (representation of the networks in a binary or real number string), as proposed by [13,14], or indirect coding, as proposed by [8], but GProp evolves the initial parameters of the network (initial weights and learning constants) using specific genetic operators.…”
Section: GProp Method (citation type: mentioning)
Confidence: 99%
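The distinction this statement draws is between encoding the whole network as a binary or real-number string and, as in GProp, evolving only the initial parameters (initial weights and learning constants) with operators that act on them directly. A minimal C++ sketch of the latter idea; the struct, names, and Gaussian mutation are illustrative assumptions, not the paper's actual operators:

```cpp
#include <algorithm>
#include <random>
#include <vector>

struct Genome {
    std::vector<double> initial_weights; // copied into the MLP before training starts
    double learning_rate;                // the "learning constant", evolved alongside
};

// A genetic operator acting directly on real-valued parameters,
// rather than on a binary-string representation.
Genome mutate(const Genome& g, double sigma, std::mt19937& rng) {
    std::normal_distribution<double> noise(0.0, sigma);
    Genome child = g;
    for (double& w : child.initial_weights) w += noise(rng);
    child.learning_rate = std::max(1e-4, child.learning_rate + 0.1 * noise(rng));
    return child;
}

int main() {
    std::mt19937 rng(1);
    Genome g{std::vector<double>(10, 0.0), 0.1};
    Genome child = mutate(g, 0.05, rng);
    return 0;
}
```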
“…Some authors use binary or real encoding (representation of the networks in a binary or real number string) [37], [38], or indirect coding [39], [40], but G-Prop evolves the initial parameters of the network (initial weights and learning constants) using specific genetic operators. At the lowest level, an MLP is an object instantiated from the MLP C++ class.…”
Section: The Method (citation type: mentioning)
Confidence: 99%
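This statement adds that, at the lowest level, each individual wraps an MLP object instantiated from an MLP C++ class. A minimal sketch of that arrangement; the class name follows the text, but the constructor, members, and setWeights method are assumptions made for illustration:

```cpp
#include <cstddef>
#include <vector>

// Class name follows the text ("the MLP C++ class"); everything
// inside it is an assumption made for this sketch.
class MLP {
public:
    MLP(std::size_t inputs, std::size_t hidden, std::size_t outputs)
        : weights_((inputs + 1) * hidden + (hidden + 1) * outputs, 0.0) {}

    // The evolved genome is loaded as the network's initial weights,
    // after which ordinary training would proceed.
    void setWeights(const std::vector<double>& w) { weights_ = w; }
    std::size_t size() const { return weights_.size(); }

private:
    std::vector<double> weights_; // flattened layer matrices, biases included
};

int main() {
    MLP net(4, 8, 1); // one individual in the population = one MLP instance
    net.setWeights(std::vector<double>(net.size(), 0.01));
    return 0;
}
```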
“…Normally, networks are trained using gradient descent, but recently evolutionary computation has been used either to evolve the network structure and then learn the weights via gradient descent (e.g., for Multi-Layer Perceptrons [25,5,13] and Radial Basis Function Networks (RBF) [21]), or to bypass learning altogether by evolving the weight values as well [26]. However, because these methods use feedforward architectures, none of them implement general sequence predictors.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
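The passage contrasts two uses of evolutionary computation: evolving the structure and then training weights by gradient descent, or evolving the weight values themselves and skipping gradient learning entirely. A hedged (1+1)-style sketch of the second variant; the fitness function here is a toy placeholder standing in for prediction error on training data:

```cpp
#include <random>
#include <vector>

using Weights = std::vector<double>;

// Placeholder fitness: in the cited setting this would run the network
// on training data and return the prediction error.
double fitness(const Weights& w) {
    double err = 0.0;
    for (double x : w) err += x * x; // toy error surface, illustration only
    return err;                      // lower is better
}

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 0.05);
    Weights best(20, 0.5); // all connection weights of a fixed architecture
    for (int gen = 0; gen < 200; ++gen) {
        Weights child = best;
        for (double& w : child) w += noise(rng);          // mutate every weight
        if (fitness(child) < fitness(best)) best = child; // (1+1) selection
    }
    return 0;
}
```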
“…That is, predictors that can detect temporal dependencies in the input data that span an arbitrary number of time steps. The Mackey-Glass time series that is often used to test these methods [21,25,3,5,13], for instance, can be predicted very accurately using a feedforward network with a relatively short time-delay window on the input.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
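The point of this last statement is that Mackey-Glass, despite being generated by a delay-differential equation, can be predicted from a short time-delay window, so a feedforward network suffices. A sketch that generates the series with a simple Euler step (standard parameters a = 0.2, b = 0.1, tau = 17) and builds window-to-next-value training pairs; the window length and step size are illustrative choices:

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

int main() {
    // Mackey-Glass via a coarse Euler step; dt = 1 is an illustrative choice.
    const double a = 0.2, b = 0.1, dt = 1.0;
    const std::size_t tau = 17, n = 1000;
    std::vector<double> x(n, 1.2); // constant history as the initial condition
    for (std::size_t t = tau; t + 1 < n; ++t)
        x[t + 1] = x[t] + dt * (a * x[t - tau] / (1.0 + std::pow(x[t - tau], 10.0))
                                - b * x[t]);

    // Delay-window pairs: predict x(t+1) from [x(t-3), ..., x(t)].
    const std::size_t window = 4;
    std::vector<std::pair<std::vector<double>, double>> samples;
    for (std::size_t t = window; t + 1 < n; ++t) {
        std::vector<double> in(x.begin() + (t - window + 1), x.begin() + (t + 1));
        samples.push_back({in, x[t + 1]});
    }
    return 0;
}
```

Any network trained on such pairs sees only a fixed window, which is exactly why success on Mackey-Glass does not demonstrate a general sequence predictor.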