2005
DOI: 10.1109/tnn.2005.852237
Simultaneous Perturbation Learning Rule for Recurrent Neural Networks and Its FPGA Implementation

Abstract: Recurrent neural networks have interesting properties and can handle dynamic information processing, unlike ordinary feedforward neural networks. However, they are generally difficult to use because there is no convenient learning scheme. In this paper, a recursive learning scheme for recurrent neural networks using the simultaneous perturbation method is described. The detailed procedure of the scheme for recurrent neural networks is explained. Unlike ordinary correlation learning, this method is applicable to…
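For orientation, the simultaneous perturbation learning rule the abstract refers to is usually presented in a one-sided form along the lines of the reconstruction below. The notation (error function J, weight vector w, sign vector s with entries ±1, perturbation magnitude c, learning rate alpha) is ours, not quoted from the paper:

    \Delta w_i(t) = \frac{J(\mathbf{w}(t) + c\,\mathbf{s}(t)) - J(\mathbf{w}(t))}{c}\; s_i(t), \qquad s_i(t) \in \{-1, +1\}
    w_i(t+1) = w_i(t) - \alpha\, \Delta w_i(t)

Only the two error values J(w) and J(w + c s) are needed per update, which is what makes the rule attractive for recurrent networks and for hardware implementation.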

Cited by 76 publications (28 citation statements)
References 25 publications
“…At all these stages, the ANN was adapted using the SP method in order to obtain the best results. The process of training and initialization was modified and implemented as described by Maeda [50][51][52] (Fig. 2).…”
Section: Proposed System (mentioning; confidence: 99%)
“…Therefore, it is relatively easy to implement as a learning rule of ANNs compared with other learning rules such as the back propagation (BP) learning rule. At the same time, this learning rule is easily applicable to recurrent types of ANNs, since only the final error values are required to estimate the gradient of the error function with respect to the weights [50][51][52]. The ANN has also been chosen mainly because of its adaptability to the nonlinear and time-varying features of the noise.…”
Section: Introduction (mentioning; confidence: 99%)
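The quotation above turns on one property of the rule: a full weight update needs only the final (scalar) error values, not gradients propagated back through the recurrent dynamics. The sketch below is a minimal illustration of that property, not the cited implementation; the network shape, toy task, and hyperparameters (c, alpha) are invented for illustration.

```python
# Minimal sketch of simultaneous perturbation training for a small
# Elman-style recurrent network. Each update uses exactly two evaluations
# of the final error, regardless of the number of weights.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 2, 8, 1
N_W = N_HID * (N_IN + N_HID) + N_OUT * N_HID   # total number of weights


def final_error(w_flat, inputs, target):
    """Run the recurrent net over the whole sequence; return a scalar error."""
    w_in = w_flat[: N_HID * (N_IN + N_HID)].reshape(N_HID, N_IN + N_HID)
    w_out = w_flat[N_HID * (N_IN + N_HID):].reshape(N_OUT, N_HID)
    h = np.zeros(N_HID)
    for x in inputs:                                # recurrent dynamics
        h = np.tanh(w_in @ np.concatenate([x, h]))
    y = w_out @ h
    return 0.5 * np.sum((y - target) ** 2)          # only the final error is used


def sp_step(w_flat, inputs, target, c=1e-3, alpha=0.05):
    """One simultaneous perturbation update: two error evaluations, no BPTT."""
    s = rng.choice([-1.0, 1.0], size=w_flat.shape)  # random sign perturbation
    j_nominal = final_error(w_flat, inputs, target)
    j_perturbed = final_error(w_flat + c * s, inputs, target)
    delta = (j_perturbed - j_nominal) / c * s       # gradient estimate (s_i = +/-1)
    return w_flat - alpha * delta


# Toy usage: learn to output the mean of a short input sequence.
w = rng.normal(scale=0.1, size=N_W)
seq = [rng.normal(size=N_IN) for _ in range(5)]
tgt = np.array([np.mean(seq)])
for _ in range(200):
    w = sp_step(w, seq, tgt)
print("final error:", final_error(w, seq, tgt))
```

The cost per update is two full forward passes whatever the number of weights, which is also the property that makes the rule attractive for the FPGA implementations discussed in the other citing papers.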
“…J. Alspector et al. and G. Cauwenberghs also individually proposed a parallel gradient descent method and a stochastic error descent algorithm, respectively, which are identical to the simultaneous perturbation learning rule (Cauwenberghs, 1993) (Alspector et al., 1993). Many applications of the simultaneous perturbation method are reported in the fields of neural networks (Maeda, 1997) and their hardware implementation (Maeda, 2003) (Maeda, 2005). The simultaneous perturbation method is described as follows:…”
Section: Simultaneous Perturbation (mentioning; confidence: 99%)
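The quoted passage is cut off before the formula it introduces. As a purely illustrative numerical instance of the rule stated earlier (the numbers are invented, not taken from any of the cited papers): with three weights, c = 0.1, sign vector s = (+1, -1, +1), nominal error J(w) = 0.50 and perturbed error J(w + c s) = 0.53, the estimate is

    \Delta\mathbf{w} = \frac{0.53 - 0.50}{0.1}\,(+1, -1, +1) = (0.3, -0.3, 0.3)

so with alpha = 0.1 each weight moves by 0.03 in the direction opposite its own perturbation, scaled by how much the single perturbation raised the error.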
“…As a method for implementation, the FPGA approach, which uses reprogrammable digital ICs, is chosen, since using FPGAs for neural network implementation provides the flexibility of programmable systems along with the power and speed of parallel hardware architectures [10,11,12,13,14,15,16,17].…”
Section: Introduction (mentioning; confidence: 99%)