2010
DOI: 10.1109/tnn.2010.2041067

Memory-Efficient Fully Coupled Filtering Approach for Observational Model Building

Abstract: Generally, training neural networks with the global extended Kalman filter (GEKF) technique exhibits excellent performance, at the expense of a large increase in computational cost that can become prohibitive even for networks of moderate size. This drawback was previously addressed by heuristically decoupling some of the weights of the networks. Inevitably, ad hoc decoupling leads to a degradation in the quality (accuracy) of the resultant neural networks. In this paper, we present an algorithm that emulates…
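
To make the cost the abstract alludes to concrete, below is a minimal, hypothetical sketch of fully coupled (global) EKF training for a toy single-hidden-layer network: the entire weight vector is the filter state, so the covariance matrix P is n_w × n_w, which is exactly the memory bottleneck the paper targets. This illustrates the standard GEKF technique the abstract refers to, not the paper's memory-efficient algorithm; all function names and hyperparameters here are invented for the example.

```python
# Minimal sketch of global-EKF (GEKF) training for a tiny neural network.
# Illustrative only: names and hyperparameters are hypothetical, not from the paper.
import numpy as np

def net(w, x, n_hidden):
    """Single-hidden-layer tanh network; w packs all weights as one vector."""
    n_in = x.shape[0]
    W1 = w[:n_hidden * n_in].reshape(n_hidden, n_in)
    b1 = w[n_hidden * n_in : n_hidden * (n_in + 1)]
    W2 = w[n_hidden * (n_in + 1) : n_hidden * (n_in + 2)]
    b2 = w[-1]
    return float(W2 @ np.tanh(W1 @ x + b1) + b2)

def jacobian(w, x, n_hidden, eps=1e-6):
    """Finite-difference row Jacobian H = d(output)/d(weights)."""
    H = np.zeros((1, w.size))
    for i in range(w.size):
        dw = np.zeros_like(w); dw[i] = eps
        H[0, i] = (net(w + dw, x, n_hidden) - net(w - dw, x, n_hidden)) / (2 * eps)
    return H

def gekf_step(w, P, x, y, n_hidden, r=1e-2, q=1e-6):
    """One fully coupled EKF update: the whole weight vector is the state."""
    H = jacobian(w, x, n_hidden)              # 1 x n_w measurement Jacobian
    P = P + q * np.eye(w.size)                # process noise (random-walk weights)
    S = H @ P @ H.T + r                       # innovation covariance (scalar output)
    K = (P @ H.T) / S                         # Kalman gain, n_w x 1
    w = w + (K * (y - net(w, x, n_hidden))).ravel()
    P = P - K @ H @ P                         # full n_w x n_w covariance update
    return w, P

rng = np.random.default_rng(0)
n_in, n_hidden = 1, 5
n_w = n_hidden * (n_in + 2) + 1
w = 0.1 * rng.standard_normal(n_w)
P = np.eye(n_w)                               # O(n_w^2) storage: the GEKF bottleneck
for _ in range(200):
    x = rng.uniform(-1, 1, size=n_in)
    y = np.sin(3 * x[0])                      # toy regression target
    w, P = gekf_step(w, P, x, y, n_hidden)
print("fit error at x=0.5:", abs(net(w, np.array([0.5]), n_hidden) - np.sin(1.5)))
```

The covariance update is the dominant cost: O(n_w^2) storage and per-step work, which is what motivates both the heuristic decoupling the abstract criticizes and the memory-efficient fully coupled emulation the paper proposes.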

Cited by 3 publications (2 citation statements)
References 20 publications
“…Only results on the boundedness of the expected quadratic error are reported in the literature [46]. In spite of its weak theoretical foundation, a number of successful results on the application of the EKF to neural network training are available [24]-[28]. Finally, it is worth noting that, in our context, as for line-search descent methods, the Newton and quasi-Newton methods are not guaranteed to converge to a global optimum, since the optimization problem to be solved is not convex.…”
Section: B. Newton-Based Optimization
Mentioning confidence: 99%
“…The second one is based on an optimization derived from the Newton method. In practice, the selection of the weights is accomplished by means of an EKF learning procedure [24]-[28]. Both approaches require computing the gradient of the cost w.r.t.…”
Section: Introduction
Mentioning confidence: 99%
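
Both snippets point to EKF-based weight selection [24]-[28]. For contrast with the fully coupled sketch above, here is a hedged sketch of the decoupled EKF (DEKF) style of update that the abstract identifies as the conventional cost-cutting remedy: the covariance is kept block-diagonal, one block per weight group, so memory falls from O(n_w^2) to the sum of the squared group sizes, but cross-group weight correlations are discarded, which is the accuracy loss the abstract criticizes. Group boundaries and noise parameters are arbitrary illustration choices, not values from the paper.

```python
# Hedged sketch of a decoupled EKF (DEKF) update for the same toy setup.
# The covariance is block-diagonal (one block per weight group); cross-group
# correlations are dropped, trading accuracy for memory.
import numpy as np

def dekf_step(w, P_blocks, groups, H, y_err, r=1e-2, q=1e-6):
    """One decoupled EKF update.

    w        : flat weight vector
    P_blocks : list of per-group covariance matrices (block diagonal of P)
    groups   : list of index arrays partitioning the weights
    H        : 1 x n_w measurement Jacobian (d output / d weights)
    y_err    : scalar innovation, target minus network output
    """
    # Shared scalar innovation covariance, accumulated over the groups.
    S = r
    for idx, P in zip(groups, P_blocks):
        Hg = H[:, idx]
        S += float(Hg @ (P + q * np.eye(len(idx))) @ Hg.T)
    # Independent gain and update per group; cross-group terms are ignored.
    for k, (idx, P) in enumerate(zip(groups, P_blocks)):
        P = P + q * np.eye(len(idx))
        Hg = H[:, idx]
        K = (P @ Hg.T) / S                    # group Kalman gain
        w[idx] = w[idx] + (K * y_err).ravel()
        P_blocks[k] = P - K @ Hg @ P          # per-block covariance update
    return w, P_blocks
```

With one block per neuron, storage drops to a handful of small matrices instead of one full n_w × n_w covariance; the paper's contribution is to recover fully coupled (GEKF-quality) behavior without paying that full quadratic memory price.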