2020
DOI: 10.48550/arxiv.2012.11349
Preprint

A comparison of learning rate selection methods in generalized Bayesian inference

Abstract: Generalized Bayes posterior distributions are formed by putting a fractional power on the likelihood before combining with the prior via Bayes's formula. This fractional power, which is often viewed as a remedy for potential model misspecification bias, is called the learning rate, and a number of data-driven learning rate selection methods have been proposed in the recent literature. Each of these proposals has a different focus, a different target they aim to achieve, which makes them difficult to compare. I…

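To make the abstract's construction concrete, here is a minimal illustrative sketch (not from the paper): a generalized Bayes posterior tempers the likelihood by a learning rate η, i.e. π_η(θ | x) ∝ L_n(θ)^η π(θ). The normal-mean model, diffuse prior, and grid approximation below are all assumptions chosen for simplicity.

```python
import numpy as np

# Generalized posterior: pi_eta(theta | x) ∝ L_n(theta)^eta * pi(theta),
# computed on a grid for a normal mean with known unit variance.

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=50)

theta = np.linspace(-2.0, 4.0, 1000)          # grid over the mean parameter
dtheta = theta[1] - theta[0]
log_lik = -0.5 * ((data[None, :] - theta[:, None]) ** 2).sum(axis=1)
log_prior = -0.5 * (theta / 10.0) ** 2        # diffuse N(0, 10^2) prior

def generalized_posterior(eta):
    """Grid approximation of the eta-tempered posterior density."""
    log_post = eta * log_lik + log_prior
    log_post -= log_post.max()                # stabilize before exponentiating
    dens = np.exp(log_post)
    return dens / (dens.sum() * dtheta)       # normalize on the grid

def post_sd(eta):
    """Posterior standard deviation under learning rate eta."""
    dens = generalized_posterior(eta)
    mean = (theta * dens).sum() * dtheta
    return np.sqrt(((theta - mean) ** 2 * dens).sum() * dtheta)

# Smaller eta flattens the posterior (wider credible sets), the usual
# remedy for over-confidence under model misspecification.
for eta in (1.0, 0.5, 0.1):
    print(f"eta={eta}: posterior sd = {post_sd(eta):.3f}")
```

In this conjugate setting the posterior precision is n·η plus the prior precision, so the printed standard deviations shrink roughly like 1/√(nη); the choice of η directly scales the reported uncertainty, which is why data-driven selection matters.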
Cited by 6 publications (8 citation statements)
References 29 publications
“…While the asymptotic concentration results hold for any fixed choice of ω (with an appropriate corresponding choice of β in the context of Theorem 3), the fact that ω does affect the finite-sample performance means that data-driven choices are needed. As we briefly mentioned in Section 3, there is an active literature on data-dependent learning rate selection; see Wu and Martin (2020) for a comparison. However, it remains unclear how these various methods might perform in an especially challenging application such as this, especially one with an infinite-dimensional quantity of interest.…”
Section: Discussion
confidence: 99%
“…The overall conclusion drawn in Wu and Martin (2020) is that only the GPC algorithm provides satisfactory calibration of the generalized posterior credible sets in general. This is not surprising, given that the other methods are designed to achieve other properties.…”
Section: Learning Rate Selection Methods
confidence: 97%
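The GPC algorithm mentioned above tunes the learning rate so that credible sets attain their nominal coverage. The following is a schematic sketch in that spirit, not the authors' implementation: the normal-mean model, the deliberately misspecified variance, the bootstrap coverage estimate, and the bisection search are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Misspecified setup (illustrative): the model assumes unit variance,
# but the data-generating distribution has standard deviation 2.
data = rng.normal(loc=0.0, scale=2.0, size=100)
n = len(data)
theta_hat = data.mean()
z = 1.96  # 95% normal quantile

def coverage(eta, n_boot=2000):
    """Bootstrap estimate of credible-interval coverage at learning rate eta.

    Under the assumed N(theta, 1) model with a flat prior, the eta-posterior
    is N(xbar, 1/(n*eta)), so the 95% credible interval has half-width
    z / sqrt(n * eta). Coverage is the fraction of bootstrap means the
    interval (centered at each bootstrap mean) would capture theta_hat.
    """
    half = z / np.sqrt(n * eta)
    idx = rng.integers(0, n, size=(n_boot, n))
    boot_means = data[idx].mean(axis=1)
    return np.mean(np.abs(boot_means - theta_hat) <= half)

# Coverage decreases in eta (larger eta -> narrower credible sets),
# so bisect to find the eta whose estimated coverage is about 0.95.
lo, hi = 1e-3, 10.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if coverage(mid) > 0.95:
        lo = mid
    else:
        hi = mid
eta_star = 0.5 * (lo + hi)
print(f"calibrated learning rate = {eta_star:.3f}")
```

Here the model's variance is understated by a factor of four, so calibration should select a learning rate near 1/4, widening the credible sets to compensate for the misspecification.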
“…A number of different data-driven methods for selecting the learning rate η have been proposed in the recent literature. Here we give just a brief summary of these; a more thorough review can be found in Wu and Martin (2020).…”
Section: Learning Rate Selection Methods
confidence: 99%
“…This provides a heuristic motivation for the default β = 1. However, in a misspecified setting smaller values of β are needed to avoid over-confidence in the generalised posterior, taking misspecification into account; see the recent review of Wu and Martin (2020). Here we aim to pick β such that the scale of the asymptotic covariance matrix of the generalised posterior (H −1 * ; Theorem 2) matches that of the minimum MMD point estimator (H −1 * J * H −1 * ; Lemma 4), an approach proposed in Lyddon et al (2019).…”
Section: Default Setting For β
confidence: 99%