SMC'98 Conference Proceedings. 1998 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.98CH36218)
DOI: 10.1109/icsmc.1998.728124

Extended Kalman filter neural network training: experimental results and algorithm improvements

Abstract: It is well known that the Extended Kalman Filter (EKF) neural network training algorithm is superior to the standard backpropagation algorithm. However, there are many variations on the EKF implementation that can significantly affect its performance. For example, improper initialization of three parameters causes the algorithm to perform poorly. There are also two advanced methods, de-coupling and multistreaming, which need to be properly applied based on the specifics of the problem. This paper presents the re…

Cited by 12 publications (18 citation statements)
References 5 publications
“…And during the training of the RMLP network, the recency phenomenon [10], [11] often occurs. So, during the training of the multi-input, multi-output modified RMLP network model, the decoupled extended Kalman filter (DEKF) method [10], [11], [12], the multi-stream procedure, and truncated backpropagation through time (BPTT) [10], [11] are used to accelerate convergence and improve training performance. For the training process of our model, Algorithm 2.1 is presented as follows.…”
Section: Problem Statement
confidence: 99%
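As a rough illustration of the decoupled EKF update the quoted passage refers to: the weights are partitioned into groups (for example, per node), each group keeps its own covariance, and cross-group covariances are ignored. This is a minimal sketch in Python/NumPy; the names (dekf_step, H_blocks, q) and the scalar process-noise choice are illustrative assumptions, not the cited authors' code.

import numpy as np

def dekf_step(groups, H_blocks, error, R, q=1e-4):
    # One decoupled EKF (DEKF) step. Each weight group g carries its
    # own weights g['w'] (n_g,) and covariance g['P'] (n_g, n_g);
    # H_blocks[i] is the (m, n_g) Jacobian of the m network outputs
    # with respect to group i's weights, and error is the (m,) vector
    # of targets minus outputs.
    # Shared scaling matrix A = [R + sum_g H_g P_g H_g^T]^(-1).
    S = R + sum(H @ g['P'] @ H.T for g, H in zip(groups, H_blocks))
    A = np.linalg.inv(S)
    for g, H in zip(groups, H_blocks):
        K = g['P'] @ H.T @ A                  # group Kalman gain
        g['w'] += K @ error                   # weight update
        g['P'] -= K @ H @ g['P']              # covariance update
        g['P'] += q * np.eye(len(g['w']))     # additive process noise
    return groups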
“…Most of these techniques, like quasi-Newton and Levenberg-Marquardt, demonstrate better performance as they involve second-order derivative information. In addition, these algorithms are implemented in a batch (multistreaming) mode where weights are updated based on more than one training sample in the training set, in contrast with the conventional BP where weights are updated by involving only one training sample (a serial mode) [3]. Even though second-order algorithms have proven to outperform the classical first-order BP, they may suffer from poor convergence properties due to problems with local minima [2].…”
Section: Introduction
confidence: 99%
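To make the serial-versus-batch distinction in the quoted passage concrete, here is a minimal Python/NumPy sketch contrasting one update per sample with a single update over many samples; grad is assumed to be a user-supplied gradient function of the training loss, and all names are illustrative.

import numpy as np

def serial_mode(w, X, y, grad, lr=0.01):
    # Serial (per-pattern) mode: one weight update per training sample.
    for x_i, y_i in zip(X, y):
        w = w - lr * grad(w, x_i[None, :], np.atleast_1d(y_i))
    return w

def batch_mode(w, X, y, grad, lr=0.01):
    # Batch mode: a single weight update computed from all samples.
    return w - lr * grad(w, X, y)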
“…It involves training M identical parallel neural networks using several training samples, followed by a single weight update computed from the errors of all M networks. The above algorithm can be adjusted to multi-stream mode by replacing m with M × m [3].…”
confidence: 99%
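A sketch of how that stacking can work, assuming each of the M streams contributes an m-dimensional error vector and an (m × n) Jacobian: concatenating them yields a single EKF measurement of dimension M·m, which is the "replacing m with M × m" adjustment the quote describes. Names are illustrative.

import numpy as np

def build_multistream(H_streams, err_streams, R_single):
    # Stack M per-stream Jacobians (each m x n) and error vectors
    # (each of length m) into one measurement of dimension M*m.
    H = np.vstack(H_streams)            # (M*m, n)
    err = np.concatenate(err_streams)   # (M*m,)
    # Block-diagonal noise covariance, one copy of R per stream.
    R = np.kron(np.eye(len(H_streams)), R_single)
    return H, err, R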
“…This is because it is easy to implement and exhibits computationally efficient calculation, which is especially useful for nonlinear systems and practical applications; see [14]. There are many variables that affect the EKF training algorithm's performance.…”
Section: Introduction
confidence: 99%
“…These variables are matrices that must be correctly initialized; otherwise the EKF training algorithm can exhibit poor performance. These matrices are the estimation error covariance matrix (P), the measurement noise covariance matrix (R), and the additional process noise matrix (Q) [14]. The iterative version of the EKF is the iterated extended Kalman filter (IEKF), which improves on the EKF's linearization by relinearizing recursively about the updated estimate; this version is more powerful than the standard EKF for neural network training [15].…”
Section: Introduction
confidence: 99%
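As a rough illustration of the three matrices the quote names, here is a minimal Python/NumPy initialization sketch; the scalar values p0, r, and q are placeholder assumptions, not the recommended settings from the paper.

import numpy as np

def init_ekf(n_weights, n_outputs, p0=100.0, r=10.0, q=1e-4):
    # P: initial estimation error covariance; large values let the
    #    weights adapt quickly early in training.
    # R: measurement noise covariance; larger values yield smaller,
    #    more cautious weight updates.
    # Q: additive process noise; keeps P from collapsing to zero so
    #    the filter does not stop learning.
    P = p0 * np.eye(n_weights)
    R = r * np.eye(n_outputs)
    Q = q * np.eye(n_weights)
    return P, R, Q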