Proceedings of International Conference on Neural Networks (ICNN'97)
DOI: 10.1109/icnn.1997.614247

RBF neural network, basis functions and genetic algorithm

Cited by 30 publications (21 citation statements)
References 5 publications
“…Evolino evolves weights to the nonlinear, hidden nodes while computing optimal linear mappings from hidden state to output, using methods such as pseudo-inverse-based linear regression (Penrose, 1955) or support vector machines (Vapnik, 1995), depending on the notion of optimality employed. This generalizes methods such as those of Maillard (Maillard & Gueriot, 1997) and Ishii et al (Ishii, van der Zant, Bečanović, & Plöger, 2004;van der Zant, Bečanović, Ishii, Kobialka, & Plöger, 2004) that evolve radial basis functions and ESNs, respectively. Applied to the LSTM architecture, Evolino can solve tasks that ESNs (Jaeger, 2004a) cannot and achieves higher accuracy in certain continuous function generation tasks than conventional gradient descent RNNs, including gradient-based LSTM (G-LSTM).…”
Section: Introduction (citation type: mentioning)
confidence: 99%
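The pseudo-inverse-based linear readout described in the statement above admits a compact illustration. The following is a minimal sketch assuming toy NumPy data; the variable names (H, Y, W) are illustrative and not taken from the cited papers. The output weights of a fixed nonlinear hidden layer are obtained in closed form as the minimum-norm least-squares solution via the Moore-Penrose pseudo-inverse (Penrose, 1955).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of a 10-dimensional hidden state and a scalar target.
H = rng.standard_normal((200, 10))                 # hidden-state activations
W_true = rng.standard_normal((10, 1))
Y = H @ W_true + 0.01 * rng.standard_normal((200, 1))

# The Moore-Penrose pseudo-inverse gives the minimum-norm least-squares
# solution for the linear readout (Penrose, 1955).
W = np.linalg.pinv(H) @ Y

Y_hat = H @ W                                      # linear map: hidden state -> output
print("training MSE:", float(np.mean((Y - Y_hat) ** 2)))
```

Because the readout is linear in the hidden state, this step is exact and cheap, which is what lets evolution concentrate on the nonlinear hidden weights.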
“…There has been some work in this area where good results were reported [119], [120], [245]-[253]. In general, ANNs using distributed representation are more compact and have better generalization capability for most practical problems.…”
Section: Simultaneous Evolution of Architectures and Connection Weights (citation type: mentioning)
confidence: 99%
“…The weights of the recurrent part of the network are evolved, while the weights of the output layer are computed analytically when the recurrent subnetwork is evaluated during evolution. This procedure generalizes ideas from Maillard [12], in which a similar hybrid approach was used to train feedforward networks of radial basis functions.…”
Section: The Evolino Algorithm (citation type: mentioning)
confidence: 99%
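The hybrid scheme described above, evolving the nonlinear parameters while solving the linear readout analytically, can be sketched in the spirit of the cited RBF work. This is a hedged illustration, not the authors' implementation: the GA operators (truncation selection, Gaussian mutation), population size, and the toy regression task are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D regression task.
x = np.linspace(-3, 3, 120).reshape(-1, 1)
y = np.sin(2 * x) + 0.05 * rng.standard_normal(x.shape)

N_RBF, POP, GENS = 6, 30, 40

def rbf_features(x, centers, widths):
    # Gaussian basis activations: one column per RBF unit.
    return np.exp(-((x - centers.T) ** 2) / (2 * widths.T ** 2))

def fitness(genome):
    # Decode the genome into centers and widths, solve the linear
    # readout by least squares, and score the candidate by training MSE.
    centers = genome[:N_RBF, None]
    widths = np.abs(genome[N_RBF:, None]) + 1e-3
    Phi = rbf_features(x, centers, widths)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # analytic output weights
    return float(np.mean((Phi @ w - y) ** 2))

pop = rng.uniform(-3, 3, (POP, 2 * N_RBF))        # random centers and widths
for _ in range(GENS):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[: POP // 2]]                 # truncation selection
    children = elite + 0.2 * rng.standard_normal(elite.shape)   # Gaussian mutation
    pop = np.vstack([elite, children])

best = min(pop, key=fitness)
print("best training MSE:", fitness(best))
```

The division of labor mirrors the quoted description: the GA searches only the nonlinear basis-function parameters, while each candidate's output layer is fitted exactly and instantly, so fitness evaluation never requires gradient descent on the readout.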