2008
DOI: 10.1002/acs.1080

Use of Kullback–Leibler divergence for forgetting

Abstract: Non-symmetric Kullback–Leibler divergence (KLD) measures the proximity of probability density functions (pdfs). Bernardo (Ann. Stat. 1979; 7(3):686–690) showed its unique role in the approximation of pdfs, and the order of the KLD arguments is also implied by his methodological result. Functional approximation in estimation and stabilized forgetting, which serve for tracking slowly varying parameters, use the reversed order. This choice has a pragmatic motivation: the recursive estimator often approximates the parametric …
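
For orientation, a brief sketch of the quantity involved and of why the argument order matters (generic notation, not taken from the paper's text): the non-symmetric KLD of a pdf f from a pdf g is

  D(f \| g) = \int f(x) \ln \frac{f(x)}{g(x)} \, dx,

and in general D(f \| g) \neq D(g \| f). With a forgetting factor \lambda \in (0,1], the two weighted minimization problems built from the two argument orders have different solutions:

  \arg\min_p \big[ \lambda\, D(p \| p_{\mathrm{post}}) + (1-\lambda)\, D(p \| p_{\mathrm{alt}}) \big] \;\propto\; p_{\mathrm{post}}^{\lambda}\, p_{\mathrm{alt}}^{\,1-\lambda},

  \arg\min_p \big[ \lambda\, D(p_{\mathrm{post}} \| p) + (1-\lambda)\, D(p_{\mathrm{alt}} \| p) \big] \;=\; \lambda\, p_{\mathrm{post}} + (1-\lambda)\, p_{\mathrm{alt}},

i.e. a geometric (exponential-forgetting) combination versus an arithmetic mixture of the posterior p_{\mathrm{post}} and its flattened alternative p_{\mathrm{alt}}. This is the standard background behind stabilized forgetting; the paper's specific argument for choosing one order is only summarized in the truncated abstract above.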

Cited by 13 publications (2 citation statements)
References 13 publications
“…Although the resulting model does not specify the transition matrix of the Markov chain explicitly, Smith and Miller (1986) argued that this is not a defect of the model, since the data provide information about π_{t−1|t−1,k} and π_{t|t−1,k}, but no additional information about Q. Exponential forgetting has been used for updating discrete probabilities in a different two-hypothesis context by Kárný and Andrýsek (2009).…”
Section: Dynamic Model Averaging (DMA), mentioning
Confidence: 99%
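
The exponential forgetting mentioned in this excerpt, applied to a vector of discrete model probabilities as in dynamic model averaging, amounts to raising the probabilities to a power alpha (the forgetting factor) and renormalizing. A minimal Python sketch, with illustrative names and an assumed value of alpha, not code from the cited works:

import numpy as np

def exponential_forgetting(pi_prev, alpha=0.99):
    # pi_prev: posterior model probabilities at time t-1 (sums to 1)
    # alpha:   forgetting factor in (0, 1]; alpha = 1 leaves them unchanged
    w = np.asarray(pi_prev, dtype=float) ** alpha
    return w / w.sum()  # predicted model probabilities at time t

# Example with three candidate models: forgetting pulls the
# probabilities slightly towards the uniform distribution.
print(exponential_forgetting([0.70, 0.20, 0.10], alpha=0.90))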
“…At the Bayesian level, a sort of forgetting arises through combining the posterior pdf with its flattened alternative. The combination strategies prominently involve the non-symmetric Kullback–Leibler divergence (KLD) [8], with different properties depending on the order of the KLD arguments [9]. There is rich literature on the adaptation of a single forgetting factor causing the information about all of the system parameters to be uniformly discounted [10]–[13].…”
Section: Introduction, mentioning
Confidence: 99%
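
As a concrete, simplified illustration of the combination described in this excerpt, here is the geometric form of stabilized forgetting for a discrete distribution; the uniform "flattened" alternative and the function name are illustrative assumptions, not taken from the cited references:

import numpy as np

def stabilized_forgetting(posterior, alternative, lam=0.95):
    # Geometric combination proportional to posterior**lam * alternative**(1 - lam).
    # lam = 1 keeps the posterior; smaller lam discounts old information
    # towards the flattened alternative.
    p = np.asarray(posterior, float) ** lam * np.asarray(alternative, float) ** (1.0 - lam)
    return p / p.sum()

# Discount a sharp posterior towards a uniform (flattened) alternative.
posterior = [0.80, 0.15, 0.05]
alternative = [1/3, 1/3, 1/3]
print(stabilized_forgetting(posterior, alternative, lam=0.90))

With a uniform alternative, this reduces to the exponential-forgetting update of the discrete probabilities sketched earlier.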