2013
DOI: 10.1016/j.econlet.2012.10.002

A note on exact correspondences between adaptive learning algorithms and the Kalman filter

Abstract: Digressing into the origins of the two main algorithms considered in the literature of adaptive learning, namely Least Squares (LS) and Stochastic Gradient (SG), we found a connection between their non-recursive forms and their interpretation within a state-space unifying framework. Based on such connection, we extend the correspondence between the LS and the Kalman filter recursions to a formulation with time-varying gains of the former, and also present a similar correspondence for the case of the SG. Our co…

Cited by 13 publications (17 citation statements) · References 25 publications

“…Under the assumption of a Gaussian random walk parameter drift model for $\varphi_t$, Berardi and Galimberti (2013) have shown that $R_t$ is inversely related to the matrix of mean squared errors associated with the Kalman filter coefficient estimates, $E\left[(\varphi_t - \hat{\varphi}_t)(\varphi_t - \hat{\varphi}_t)'\right]$. Hence, in a Bayesian interpretation, as $R_0 \to 0$ the prior becomes more diffuse, since it is associated with a higher uncertainty about the coefficient estimates.…”
Section: Training Sample-Based Methods
confidence: 99%
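Read literally, the relation quoted above can be sketched as follows; the symbol $P_t$ and the proportionality are notational assumptions of this sketch, not taken from the citing paper:

\[
  P_t \;=\; E\!\left[(\varphi_t - \hat{\varphi}_t)(\varphi_t - \hat{\varphi}_t)'\right],
  \qquad
  R_t \;\propto\; P_t^{-1},
\]

so that a small initial $R_0$ corresponds to a large $P_0$, i.e. a diffuse (high-variance) prior over the coefficients.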
“…The LS algorithm is originally motivated as the result of minimizing a weighted sum of squared errors, where the weights are determined by the learning gain parameter (see Berardi and Galimberti, 2013). Hence, the learning gain determines how quickly new information is incorporated into the algorithm's coefficient estimates.…”
Section: Algorithm 1 (LS)
confidence: 99%
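As a minimal sketch of the gain-weighted recursion this quote describes, the following Python snippet implements constant-gain recursive least squares; all names and the simulated-data usage are illustrative assumptions, not the citing paper's code:

import numpy as np

def constant_gain_ls(X, y, gamma, phi0, R0):
    """Recursive least squares with constant learning gain gamma.

    A larger gamma discounts old observations faster, so new
    information is incorporated into the estimates more quickly.
    """
    phi, R = phi0.copy(), R0.copy()
    for x_t, y_t in zip(X, y):
        # Update the second-moment matrix of the regressors.
        R = R + gamma * (np.outer(x_t, x_t) - R)
        # Move the coefficients toward the latest prediction error.
        phi = phi + gamma * np.linalg.solve(R, x_t) * (y_t - x_t @ phi)
    return phi, R

# Illustrative usage on simulated data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = X @ np.array([0.5, -1.0]) + 0.1 * rng.normal(size=500)
phi_hat, _ = constant_gain_ls(X, y, gamma=0.02, phi0=np.zeros(2), R0=np.eye(2))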
“…To obtain the smoothed initials associated with the learning algorithms, we make use of a parallel drawn in Berardi and Galimberti (2013) between these algorithms and the Kalman filter applied to the estimation of a time-varying parameters model (see also McGough, 2003). More specifically, we start by establishing a state-space framework in which the coefficients vector of the linear model in (1) evolves according to…”
Section: Smoothing Recursions
confidence: 99%
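The quote is truncated before the equations, but the standard time-varying-parameters state space it presumably refers to reads (notation assumed):

\[
  y_t = x_t' \varphi_t + u_t,
  \qquad
  \varphi_t = \varphi_{t-1} + v_t,
\]

with $u_t$ and $v_t$ mutually independent Gaussian disturbances, so the Kalman filter and smoother apply to $\varphi_t$ directly.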
“…More specifically, Berardi and Galimberti (2013) have recently shown how to extend the asymptotic correspondences between these algorithms to hold exactly in transient phases too, hence allowing for a unified approach to initializations. From these correspondences, long-standing Kalman smoothing results can be readily translated into smoothing routines for the estimates obtained from each of the above learning algorithms, and we develop our routine using these premises in Section 3.…”
Section: Introduction
confidence: 99%
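For reference, the "long-standing Kalman smoothing results" alluded to presumably include the Rauch–Tung–Striebel backward recursion, which for the random-walk state equation sketched above (transition matrix equal to the identity) takes the form (notation assumed):

\[
  \hat{\varphi}_{t|T} \;=\; \hat{\varphi}_{t|t} + J_t\left(\hat{\varphi}_{t+1|T} - \hat{\varphi}_{t+1|t}\right),
  \qquad
  J_t \;=\; P_{t|t}\, P_{t+1|t}^{-1},
\]

run backwards from $t = T-1$ after a forward filtering pass.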
“…Under a constant gain specification, $\beta(t,i) = (1-\gamma)^{t-i}$, so that past observations are given geometrically decaying weights, whereas a decreasing gain leads to the famous OLS estimator of basic econometrics (see Berardi and Galimberti, 2013, for derivations). These properties may provide an explanation for the prominence of the LS algorithm in the adaptive learning literature as the choice to represent agents' mechanism of adaptive learning; here we follow such practice and focus our calibration analysis on the LS case.…”
Section: Model and Estimation
confidence: 99%
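A small numerical check of the two weighting schemes contrasted in this quote (the gain value and sample length are arbitrary assumptions):

import numpy as np

T, gamma = 5, 0.1

# Constant gain: the observation from i periods ago carries weight
# (1 - gamma)**i, i.e. geometrically decaying influence.
constant_gain_w = (1 - gamma) ** np.arange(T)[::-1]

# Decreasing gain (gamma_t = 1/t): all observations end up weighted
# equally, reproducing the ordinary least squares (OLS) estimator.
decreasing_gain_w = np.full(T, 1.0 / T)

print(constant_gain_w)    # [0.6561 0.729  0.81   0.9    1.    ]
print(decreasing_gain_w)  # [0.2 0.2 0.2 0.2 0.2]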