Published: 2000
DOI: 10.1109/72.839017
A local linearized least squares algorithm for training feedforward neural networks

Abstract: In training the weights of a feedforward neural network, it is well known that the global extended Kalman filter (GEKF) algorithm has much better performance than the popular gradient descent with error backpropagation in terms of convergence and quality of solution. However, the GEKF is very computationally intensive, which has led to the development of efficient algorithms such as the multiple extended Kalman algorithm (MEKA) and the decoupled extended Kalman filter (DEKF) algorithm, which are based on dimens…
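The abstract cuts off above, but the decoupling idea behind MEKA and DEKF is standard: rather than maintaining one covariance matrix over all weights as GEKF does, the weights are partitioned into groups, each with its own small covariance. The sketch below is a minimal, generic illustration of such a per-group update for a scalar network output; it is not the paper's LLLS algorithm, and the function name, grouping, and noise variance `r` are assumptions for illustration.

```python
import numpy as np

def decoupled_ekf_step(groups, grads, err, r=1.0):
    """One DEKF-style update with a scalar measurement (network output).

    groups : list of (w, P) pairs -- weights and covariance per group
    grads  : list of d(output)/d(w_g) vectors, one per group
    err    : innovation, target minus network output
    r      : assumed measurement-noise variance (hyperparameter)
    """
    updated = []
    for (w, P), g in zip(groups, grads):
        H = g.reshape(-1, 1)                 # measurement Jacobian (column)
        s = float(H.T @ P @ H) + r           # innovation variance
        K = (P @ H) / s                      # Kalman gain for this group
        w = w + K.ravel() * err              # weight update
        P = P - K @ (H.T @ P)                # covariance update
        updated.append((w, P))
    return updated
```

With a single group covering all weights this collapses to a GEKF step; splitting the weights neuron-wise or layer-wise discards cross-group covariance in exchange for far lower cost, which is the trade-off the abstract alludes to.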

Cited by 15 publications (6 citation statements)
References 11 publications
“…Like localized EKF, a complex problem can be divided into multiple localized subproblems, each of which is solved by RLS. There exist some localized RLS algorithms, such as the local linearized LS (LLLS) method [301] and the block RLS (BRLS) algorithm [302].…”
Section: Recursive Least Squares
confidence: 99%
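As context for this localized view, the sketch below is one textbook exponentially weighted RLS estimator; in a localized scheme, one such estimator per neuron or weight group replaces a single global solver. This is generic RLS, not the specific LLLS [301] or BRLS [302] recursions, and the forgetting factor `lam` and initialization `delta` are assumed hyperparameters.

```python
import numpy as np

class LocalRLS:
    """Exponentially weighted RLS for one linear subproblem y ~ w.x."""

    def __init__(self, n, lam=0.99, delta=100.0):
        self.w = np.zeros(n)
        self.P = delta * np.eye(n)   # estimate of the inverse autocorrelation
        self.lam = lam               # forgetting factor

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)                    # gain vector
        e = y - self.w @ x                              # a priori error
        self.w = self.w + k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam  # rank-1 downdate
        return e
```

Note that each estimator carries its own `P` matrix, which is exactly the per-neuron inverse autocorrelation storage cost criticized in the Introduction excerpt further below.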
“…In the "regression" submenu of SPSS, a very rich and powerful regression modelling function is provided. Based on the idea of least squares [31], SPSS can easily produce the desired results for a regression model with more than one independent variable. The weak impact on the working face is relatively small.…”
Section: Design of BP Neural Network Based on MATLAB
confidence: 99%
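For the multivariate regression case this excerpt describes, ordinary least squares simply solves an overdetermined linear system; the snippet below performs the same kind of fit SPSS's linear regression produces, on made-up placeholder data (the numbers are illustrative only).

```python
import numpy as np

# Illustrative data: 5 observations, 2 independent variables.
X = np.column_stack([
    np.ones(5),                    # intercept column
    [1.0, 2.0, 3.0, 4.0, 5.0],     # predictor x1
    [2.0, 1.0, 4.0, 3.0, 5.0],     # predictor x2
])
y = np.array([3.1, 4.9, 9.2, 10.8, 15.1])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares coefficients
print(beta)   # [intercept, coefficient of x1, coefficient of x2]
```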
“…Thirdly, most of them are designed for training FNNs, and only a few are designed for training RNNs [31]. Lastly, most of them have high computational and space complexity, and some of them even require preserving one inverse autocorrelation matrix for each neuron [29], [33], [34]. As a result, the linear or recursive least squares methods seem to have been forgotten in DL.…”
Section: Introduction
confidence: 99%