2008
DOI: 10.1109/tnn.2008.2000396
Beyond Feedforward Models Trained by Backpropagation: A Practical Training Tool for a More Efficient Universal Approximator

Abstract: The cellular simultaneous recurrent neural network (SRN) has been shown to be a more powerful function approximator than the multilayer perceptron (MLP). This means that the complexity of an MLP would be prohibitively large for some problems, while an SRN could realize the desired mapping within acceptable computational constraints. The speed of training of complex recurrent networks is crucial to their successful application. This work improves on previous results by training the network with an extended Kalman filter (EKF)…
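The abstract's central idea, treating network training as state estimation with an extended Kalman filter, can be illustrated outside the paper's cellular-SRN setting. The following is a minimal sketch for a tiny feedforward network: the weights are the EKF state, each training sample supplies a scalar "measurement", and the measurement Jacobian is taken numerically. All sizes, noise levels, and helper names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def net(w, x):
    """Tiny 1-input -> 3 tanh hidden -> 1 output network (10 weights)."""
    W1, b1 = w[0:3], w[3:6]
    W2, b2 = w[6:9], w[9]
    h = np.tanh(W1 * x + b1)
    return float(W2 @ h + b2)

def jacobian(w, x, eps=1e-6):
    """Forward-difference Jacobian of the output w.r.t. the weights (1 x n)."""
    J = np.zeros((1, w.size))
    y0 = net(w, x)
    for i in range(w.size):
        wp = w.copy()
        wp[i] += eps
        J[0, i] = (net(wp, x) - y0) / eps
    return J

def ekf_train(samples, n_weights=10, q=1e-5, r=1e-2, epochs=50, seed=0):
    """EKF weight training: weights = state, target output = measurement."""
    rng = np.random.default_rng(seed)
    w = 0.1 * rng.standard_normal(n_weights)   # weight "state" estimate
    P = np.eye(n_weights)                      # weight covariance
    for _ in range(epochs):
        for x, y in samples:
            H = jacobian(w, x)                 # measurement Jacobian (1 x n)
            P = P + q * np.eye(n_weights)      # process-noise inflation
            S = H @ P @ H.T + r                # innovation covariance (1 x 1)
            K = (P @ H.T) / S                  # Kalman gain (n x 1)
            e = y - net(w, x)                  # innovation (scalar)
            w = w + (K * e).ravel()            # weight update
            P = P - K @ H @ P                  # covariance update
    return w

# Fit y = sin(x) on a few points as a toy problem.
xs = np.linspace(-1.5, 1.5, 12)
samples = [(x, np.sin(x)) for x in xs]
w = ekf_train(samples)
err = max(abs(net(w, x) - np.sin(x)) for x in xs)
```

Second-order information enters through the covariance `P`, which is one reason EKF training tends to converge in far fewer passes than plain gradient descent on small recurrent or feedforward networks.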

Cited by 78 publications (26 citation statements) | References 15 publications
“…For Test System I, if the same topology is to be implemented on an MLP, four time-delayed neural networks can be obtained by combining (6) to (9) and replacing the predicted outputs with time delayed values of the actual signal. For example, output of MLP for generator G1 can be obtained using (6), (7) and (9) as follows:…”
Section: ) Performance Comparison
mentioning
confidence: 99%
“…Such a CNN consisting of SRNs as cells is called a Cellular SRN (CSRN), and one containing multilayer perceptrons (MLPs) as cells is called a Cellular MLP (CMLP). CSRNs have been used in the maze navigation problem [8], [9], facial recognition [10] and image processing [11]. Stability of recurrent neural networks in the presence of noise and time delays is discussed in [12].…”
mentioning
confidence: 99%
“…Within this square matrix, each cell is classified as an obstacle (black), pathway (white), or the goal (red circle). The original CSRN [5] solves a maze by calculating the number of steps to the goal from any cell along the pathway. The network does not compute the exact number of steps to the goal, but rather solves a maze in such a way that, from any pathway cell, the nearest neighbor with the lowest value points in the direction of the shortest path to the goal.…”
Section: A Maze Background
mentioning
confidence: 99%
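The value map described in the citation above, where each pathway cell's lowest-valued neighbor points along the shortest path to the goal, is exactly what classical breadth-first "wavefront" expansion from the goal produces; the CSRN learns to approximate such a map. The sketch below builds that target map directly (a non-neural baseline); the grid encoding and helper names are illustrative assumptions.

```python
from collections import deque

def wavefront(grid, goal):
    """grid: 2D list, 0 = pathway, 1 = obstacle; goal: (row, col).
    Returns a dict mapping each reachable pathway cell to its
    steps-to-goal, computed by BFS outward from the goal."""
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0}
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

# Toy 3x3 maze: middle row mostly blocked, goal in the bottom-left.
maze = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
d = wavefront(maze, goal=(2, 0))
```

Greedy descent on `d` (always stepping to the neighbor with the smallest value) then traces a shortest path from any pathway cell, which is the behavior the CSRN's learned cell values reproduce.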
“…But in order to handle general nonlinear decision problems, the best ADP designs all require several components which learn to approximate unknown nonlinear mappings. Neural networks are essential to handling this task well, in general complex environments, because several types of neural networks provide more accurate universal approximation than any classical alternatives [9,10]. They also make it possible to use new chips in the Cellular Neural Network family which already offer thousands of processors in parallel on a single commercially available chip.…”
Section: The Technology To Meet the Need
mentioning
confidence: 99%